2026-04-05 00:00:08.125907 | Job console starting
2026-04-05 00:00:08.145589 | Updating git repos
2026-04-05 00:00:08.368422 | Cloning repos into workspace
2026-04-05 00:00:08.614195 | Restoring repo states
2026-04-05 00:00:08.637077 | Merging changes
2026-04-05 00:00:08.637098 | Checking out repos
2026-04-05 00:00:09.172787 | Preparing playbooks
2026-04-05 00:00:10.306996 | Running Ansible setup
2026-04-05 00:00:19.052675 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-04-05 00:00:21.038797 |
2026-04-05 00:00:21.038928 | PLAY [Base pre]
2026-04-05 00:00:21.081876 |
2026-04-05 00:00:21.082017 | TASK [Setup log path fact]
2026-04-05 00:00:21.130509 | orchestrator | ok
2026-04-05 00:00:21.167712 |
2026-04-05 00:00:21.167834 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-05 00:00:21.243320 | orchestrator | ok
2026-04-05 00:00:21.259884 |
2026-04-05 00:00:21.259986 | TASK [emit-job-header : Print job information]
2026-04-05 00:00:21.354465 | # Job Information
2026-04-05 00:00:21.354681 | Ansible Version: 2.16.14
2026-04-05 00:00:21.354714 | Job: testbed-deploy-next-in-a-nutshell-with-tempest-ubuntu-24.04
2026-04-05 00:00:21.354747 | Pipeline: periodic-midnight
2026-04-05 00:00:21.354777 | Executor: 521e9411259a
2026-04-05 00:00:21.354803 | Triggered by: https://github.com/osism/testbed
2026-04-05 00:00:21.354843 | Event ID: 1928b94beaae403ebd11dd0b50186fab
2026-04-05 00:00:21.360797 |
2026-04-05 00:00:21.360885 | LOOP [emit-job-header : Print node information]
2026-04-05 00:00:21.856401 | orchestrator | ok:
2026-04-05 00:00:21.856564 | orchestrator | # Node Information
2026-04-05 00:00:21.856597 | orchestrator | Inventory Hostname: orchestrator
2026-04-05 00:00:21.856618 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-04-05 00:00:21.856636 | orchestrator | Username: zuul-testbed05
2026-04-05 00:00:21.856654 | orchestrator | Distro: Debian 12.13
2026-04-05 00:00:21.856673 | orchestrator | Provider: static-testbed
2026-04-05 00:00:21.856691 | orchestrator | Region:
2026-04-05 00:00:21.856709 | orchestrator | Label: testbed-orchestrator
2026-04-05 00:00:21.856726 | orchestrator | Product Name: OpenStack Nova
2026-04-05 00:00:21.856742 | orchestrator | Interface IP: 81.163.193.140
2026-04-05 00:00:21.872804 |
2026-04-05 00:00:21.872904 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-04-05 00:00:23.553967 | orchestrator -> localhost | changed
2026-04-05 00:00:23.561805 |
2026-04-05 00:00:23.561920 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-04-05 00:00:26.750160 | orchestrator -> localhost | changed
2026-04-05 00:00:26.768385 |
2026-04-05 00:00:26.768483 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-04-05 00:00:27.549492 | orchestrator -> localhost | ok
2026-04-05 00:00:27.555077 |
2026-04-05 00:00:27.555165 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-04-05 00:00:27.592444 | orchestrator | ok
2026-04-05 00:00:27.625748 | orchestrator | included: /var/lib/zuul/builds/4eaff187072e4b038e3270d3005de3d9/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-04-05 00:00:27.667090 |
2026-04-05 00:00:27.667216 | TASK [add-build-sshkey : Create Temp SSH key]
2026-04-05 00:00:38.105465 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-04-05 00:00:38.105661 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/4eaff187072e4b038e3270d3005de3d9/work/4eaff187072e4b038e3270d3005de3d9_id_rsa
2026-04-05 00:00:38.105695 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/4eaff187072e4b038e3270d3005de3d9/work/4eaff187072e4b038e3270d3005de3d9_id_rsa.pub
2026-04-05 00:00:38.105717 | orchestrator -> localhost | The key fingerprint is:
2026-04-05 00:00:38.105739 | orchestrator -> localhost | SHA256:oSYlDcpUnBWNOgcHbnIZ+9BUOpF1O4UQhhFX2zNOACU zuul-build-sshkey
2026-04-05 00:00:38.105758 | orchestrator -> localhost | The key's randomart image is:
2026-04-05 00:00:38.105785 | orchestrator -> localhost | +---[RSA 3072]----+
2026-04-05 00:00:38.105803 | orchestrator -> localhost | | .o=+B@E*=.. |
2026-04-05 00:00:38.105821 | orchestrator -> localhost | | o o+O==.o.* |
2026-04-05 00:00:38.105837 | orchestrator -> localhost | | + O+* . + = |
2026-04-05 00:00:38.105853 | orchestrator -> localhost | | +o=.o . + o |
2026-04-05 00:00:38.105869 | orchestrator -> localhost | | .o+ S . |
2026-04-05 00:00:38.105891 | orchestrator -> localhost | | o |
2026-04-05 00:00:38.105908 | orchestrator -> localhost | | |
2026-04-05 00:00:38.105925 | orchestrator -> localhost | | |
2026-04-05 00:00:38.105942 | orchestrator -> localhost | | |
2026-04-05 00:00:38.105958 | orchestrator -> localhost | +----[SHA256]-----+
2026-04-05 00:00:38.106003 | orchestrator -> localhost | ok: Runtime: 0:00:09.658726
2026-04-05 00:00:38.113489 |
2026-04-05 00:00:38.113599 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-04-05 00:00:38.167766 | orchestrator | ok
2026-04-05 00:00:38.186878 | orchestrator | included: /var/lib/zuul/builds/4eaff187072e4b038e3270d3005de3d9/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-04-05 00:00:38.223327 |
2026-04-05 00:00:38.223479 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-04-05 00:00:38.276719 | orchestrator | skipping: Conditional result was False
2026-04-05 00:00:38.286763 |
2026-04-05 00:00:38.286908 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-04-05 00:00:39.278668 | orchestrator | changed
2026-04-05 00:00:39.284360 |
2026-04-05 00:00:39.284447 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-04-05 00:00:39.540981 | orchestrator | ok
2026-04-05 00:00:39.546231 |
2026-04-05 00:00:39.546316 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-04-05 00:00:40.021569 | orchestrator | ok
2026-04-05 00:00:40.026695 |
2026-04-05 00:00:40.026777 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-04-05 00:00:40.522074 | orchestrator | ok
2026-04-05 00:00:40.526946 |
2026-04-05 00:00:40.527020 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-04-05 00:00:40.573247 | orchestrator | skipping: Conditional result was False
2026-04-05 00:00:40.578790 |
2026-04-05 00:00:40.578989 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-04-05 00:00:41.862297 | orchestrator -> localhost | changed
2026-04-05 00:00:41.886352 |
2026-04-05 00:00:41.886454 | TASK [add-build-sshkey : Add back temp key]
2026-04-05 00:00:42.736708 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/4eaff187072e4b038e3270d3005de3d9/work/4eaff187072e4b038e3270d3005de3d9_id_rsa (zuul-build-sshkey)
2026-04-05 00:00:42.736885 | orchestrator -> localhost | ok: Runtime: 0:00:00.046860
2026-04-05 00:00:42.745069 |
2026-04-05 00:00:42.745151 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-04-05 00:00:43.306559 | orchestrator | ok
2026-04-05 00:00:43.311466 |
2026-04-05 00:00:43.311565 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-04-05 00:00:43.385188 | orchestrator | skipping: Conditional result was False
2026-04-05 00:00:43.525202 |
2026-04-05 00:00:43.525297 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-04-05 00:00:44.116022 | orchestrator | ok
2026-04-05 00:00:44.137264 |
2026-04-05 00:00:44.137365 | TASK [validate-host : Define zuul_info_dir fact]
2026-04-05 00:00:44.181734 | orchestrator | ok
2026-04-05 00:00:44.215486 |
2026-04-05 00:00:44.215617 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-04-05 00:00:44.755245 | orchestrator -> localhost | ok
2026-04-05 00:00:44.762801 |
2026-04-05 00:00:44.762916 | TASK [validate-host : Collect information about the host]
2026-04-05 00:00:46.413165 | orchestrator | ok
2026-04-05 00:00:46.438887 |
2026-04-05 00:00:46.439011 | TASK [validate-host : Sanitize hostname]
2026-04-05 00:00:46.542329 | orchestrator | ok
2026-04-05 00:00:46.552203 |
2026-04-05 00:00:46.552311 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-04-05 00:00:47.603173 | orchestrator -> localhost | changed
2026-04-05 00:00:47.622377 |
2026-04-05 00:00:47.622482 | TASK [validate-host : Collect information about zuul worker]
2026-04-05 00:00:48.329931 | orchestrator | ok
2026-04-05 00:00:48.339298 |
2026-04-05 00:00:48.339413 | TASK [validate-host : Write out all zuul information for each host]
2026-04-05 00:00:49.691129 | orchestrator -> localhost | changed
2026-04-05 00:00:49.711432 |
2026-04-05 00:00:49.712033 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-04-05 00:00:50.080443 | orchestrator | ok
2026-04-05 00:00:50.085413 |
2026-04-05 00:00:50.085495 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-04-05 00:02:08.651218 | orchestrator | changed:
2026-04-05 00:02:08.651462 | orchestrator | .d..t...... src/
2026-04-05 00:02:08.651500 | orchestrator | .d..t...... src/github.com/
2026-04-05 00:02:08.651525 | orchestrator | .d..t...... src/github.com/osism/
2026-04-05 00:02:08.651548 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-04-05 00:02:08.651569 | orchestrator | RedHat.yml
2026-04-05 00:02:08.666877 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-04-05 00:02:08.666895 | orchestrator | RedHat.yml
2026-04-05 00:02:08.666948 | orchestrator | = 2.2.0"...
2026-04-05 00:02:22.620442 | orchestrator | - Finding latest version of hashicorp/null...
2026-04-05 00:02:22.635792 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-04-05 00:02:22.766524 | orchestrator | - Installing hashicorp/local v2.8.0...
2026-04-05 00:02:23.322091 | orchestrator | - Installed hashicorp/local v2.8.0 (signed, key ID 0C0AF313E5FD9F80)
2026-04-05 00:02:23.389255 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-04-05 00:02:23.912749 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-04-05 00:02:24.156865 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-04-05 00:02:24.857318 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-04-05 00:02:24.857404 | orchestrator |
2026-04-05 00:02:24.857418 | orchestrator | Providers are signed by their developers.
2026-04-05 00:02:24.857426 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-04-05 00:02:24.857435 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-04-05 00:02:24.857446 | orchestrator |
2026-04-05 00:02:24.857453 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-04-05 00:02:24.857470 | orchestrator | selections it made above. Include this file in your version control repository
2026-04-05 00:02:24.857478 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-04-05 00:02:24.857485 | orchestrator | you run "tofu init" in the future.
2026-04-05 00:02:24.857717 | orchestrator |
2026-04-05 00:02:24.857739 | orchestrator | OpenTofu has been successfully initialized!
2026-04-05 00:02:24.857747 | orchestrator |
2026-04-05 00:02:24.857753 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-04-05 00:02:24.857760 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-04-05 00:02:24.857768 | orchestrator | should now work.
2026-04-05 00:02:24.857775 | orchestrator |
2026-04-05 00:02:24.857781 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-04-05 00:02:24.857788 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-04-05 00:02:24.857795 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-04-05 00:02:25.049409 | orchestrator | Created and switched to workspace "ci"!
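The initialization output above corresponds to a command sequence along these lines (a sketch only; the actual wrapper script, working directory, and variable files used by the testbed job are not visible in this log):

```shell
# Download the pinned providers (hashicorp/local, hashicorp/null,
# terraform-provider-openstack/openstack) and write .terraform.lock.hcl.
tofu init

# Create and switch to an isolated workspace for this CI run;
# state for the new "ci" workspace starts empty.
tofu workspace new ci

# Generate the execution plan shown below.
tofu plan
```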
2026-04-05 00:02:25.049460 | orchestrator |
2026-04-05 00:02:25.049467 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-04-05 00:02:25.049473 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-04-05 00:02:25.049496 | orchestrator | for this configuration.
2026-04-05 00:02:25.163703 | orchestrator | ci.auto.tfvars
2026-04-05 00:02:25.635243 | orchestrator | default_custom.tf
2026-04-05 00:02:26.658111 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-04-05 00:02:27.206089 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-04-05 00:02:27.482946 | orchestrator |
2026-04-05 00:02:27.483007 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-04-05 00:02:27.483014 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-04-05 00:02:27.483019 | orchestrator |   + create
2026-04-05 00:02:27.483025 | orchestrator |  <= read (data resources)
2026-04-05 00:02:27.483030 | orchestrator |
2026-04-05 00:02:27.483034 | orchestrator | OpenTofu will perform the following actions:
2026-04-05 00:02:27.483050 | orchestrator |
2026-04-05 00:02:27.483054 | orchestrator |   # data.openstack_images_image_v2.image will be read during apply
2026-04-05 00:02:27.483059 | orchestrator |   # (config refers to values not yet known)
2026-04-05 00:02:27.483064 | orchestrator |  <= data "openstack_images_image_v2" "image" {
2026-04-05 00:02:27.483070 | orchestrator |       + checksum = (known after apply)
2026-04-05 00:02:27.483077 | orchestrator |       + created_at = (known after apply)
2026-04-05 00:02:27.483084 | orchestrator |       + file = (known after apply)
2026-04-05 00:02:27.483090 | orchestrator |       + id = (known after apply)
2026-04-05 00:02:27.483116 | orchestrator |       + metadata = (known after apply)
2026-04-05 00:02:27.483122 | orchestrator |       + min_disk_gb = (known after apply)
2026-04-05 00:02:27.483128 | orchestrator |       + min_ram_mb = (known after apply)
2026-04-05 00:02:27.483134 | orchestrator |       + most_recent = true
2026-04-05 00:02:27.483140 | orchestrator |       + name = (known after apply)
2026-04-05 00:02:27.483146 | orchestrator |       + protected = (known after apply)
2026-04-05 00:02:27.483153 | orchestrator |       + region = (known after apply)
2026-04-05 00:02:27.483162 | orchestrator |       + schema = (known after apply)
2026-04-05 00:02:27.483168 | orchestrator |       + size_bytes = (known after apply)
2026-04-05 00:02:27.483174 | orchestrator |       + tags = (known after apply)
2026-04-05 00:02:27.483180 | orchestrator |       + updated_at = (known after apply)
2026-04-05 00:02:27.483186 | orchestrator |     }
2026-04-05 00:02:27.483193 | orchestrator |
2026-04-05 00:02:27.483198 | orchestrator |   # data.openstack_images_image_v2.image_node will be read during apply
2026-04-05 00:02:27.483202 | orchestrator |   # (config refers to values not yet known)
2026-04-05 00:02:27.483208 | orchestrator |  <= data "openstack_images_image_v2" "image_node" {
2026-04-05 00:02:27.483214 | orchestrator |       + checksum = (known after apply)
2026-04-05 00:02:27.483220 | orchestrator |       + created_at = (known after apply)
2026-04-05 00:02:27.483226 | orchestrator |       + file = (known after apply)
2026-04-05 00:02:27.483232 | orchestrator |       + id = (known after apply)
2026-04-05 00:02:27.483239 | orchestrator |       + metadata = (known after apply)
2026-04-05 00:02:27.483246 | orchestrator |       + min_disk_gb = (known after apply)
2026-04-05 00:02:27.483252 | orchestrator |       + min_ram_mb = (known after apply)
2026-04-05 00:02:27.483258 | orchestrator |       + most_recent = true
2026-04-05 00:02:27.483264 | orchestrator |       + name = (known after apply)
2026-04-05 00:02:27.483270 | orchestrator |       + protected = (known after apply)
2026-04-05 00:02:27.483276 | orchestrator |       + region = (known after apply)
2026-04-05 00:02:27.483282 | orchestrator |       + schema = (known after apply)
2026-04-05 00:02:27.483286 | orchestrator |       + size_bytes = (known after apply)
2026-04-05 00:02:27.483290 | orchestrator |       + tags = (known after apply)
2026-04-05 00:02:27.483294 | orchestrator |       + updated_at = (known after apply)
2026-04-05 00:02:27.483298 | orchestrator |     }
2026-04-05 00:02:27.483305 | orchestrator |
2026-04-05 00:02:27.483309 | orchestrator |   # local_file.MANAGER_ADDRESS will be created
2026-04-05 00:02:27.483314 | orchestrator |   + resource "local_file" "MANAGER_ADDRESS" {
2026-04-05 00:02:27.483318 | orchestrator |       + content = (known after apply)
2026-04-05 00:02:27.483322 | orchestrator |       + content_base64sha256 = (known after apply)
2026-04-05 00:02:27.483326 | orchestrator |       + content_base64sha512 = (known after apply)
2026-04-05 00:02:27.483329 | orchestrator |       + content_md5 = (known after apply)
2026-04-05 00:02:27.483333 | orchestrator |       + content_sha1 = (known after apply)
2026-04-05 00:02:27.483337 | orchestrator |       + content_sha256 = (known after apply)
2026-04-05 00:02:27.483341 | orchestrator |       + content_sha512 = (known after apply)
2026-04-05 00:02:27.483345 | orchestrator |       + directory_permission = "0777"
2026-04-05 00:02:27.483348 | orchestrator |       + file_permission = "0644"
2026-04-05 00:02:27.483352 | orchestrator |       + filename = ".MANAGER_ADDRESS.ci"
2026-04-05 00:02:27.483356 | orchestrator |       + id = (known after apply)
2026-04-05 00:02:27.483360 | orchestrator |     }
2026-04-05 00:02:27.483363 | orchestrator |
2026-04-05 00:02:27.483367 | orchestrator |   # local_file.id_rsa_pub will be created
2026-04-05 00:02:27.483371 | orchestrator |   + resource "local_file" "id_rsa_pub" {
2026-04-05 00:02:27.483375 | orchestrator |       + content = (known after apply)
2026-04-05 00:02:27.483378 | orchestrator |       + content_base64sha256 = (known after apply)
2026-04-05 00:02:27.483382 | orchestrator |       + content_base64sha512 = (known after apply)
2026-04-05 00:02:27.483386 | orchestrator |       + content_md5 = (known after apply)
2026-04-05 00:02:27.483390 | orchestrator |       + content_sha1 = (known after apply)
2026-04-05 00:02:27.483393 | orchestrator |       + content_sha256 = (known after apply)
2026-04-05 00:02:27.483402 | orchestrator |       + content_sha512 = (known after apply)
2026-04-05 00:02:27.483406 | orchestrator |       + directory_permission = "0777"
2026-04-05 00:02:27.483410 | orchestrator |       + file_permission = "0644"
2026-04-05 00:02:27.483418 | orchestrator |       + filename = ".id_rsa.ci.pub"
2026-04-05 00:02:27.483422 | orchestrator |       + id = (known after apply)
2026-04-05 00:02:27.483426 | orchestrator |     }
2026-04-05 00:02:27.483430 | orchestrator |
2026-04-05 00:02:27.483434 | orchestrator |   # local_file.inventory will be created
2026-04-05 00:02:27.483437 | orchestrator |   + resource "local_file" "inventory" {
2026-04-05 00:02:27.483441 | orchestrator |       + content = (known after apply)
2026-04-05 00:02:27.483445 | orchestrator |       + content_base64sha256 = (known after apply)
2026-04-05 00:02:27.483449 | orchestrator |       + content_base64sha512 = (known after apply)
2026-04-05 00:02:27.483452 | orchestrator |       + content_md5 = (known after apply)
2026-04-05 00:02:27.483456 | orchestrator |       + content_sha1 = (known after apply)
2026-04-05 00:02:27.483460 | orchestrator |       + content_sha256 = (known after apply)
2026-04-05 00:02:27.483464 | orchestrator |       + content_sha512 = (known after apply)
2026-04-05 00:02:27.483468 | orchestrator |       + directory_permission = "0777"
2026-04-05 00:02:27.483471 | orchestrator |       + file_permission = "0644"
2026-04-05 00:02:27.483475 | orchestrator |       + filename = "inventory.ci"
2026-04-05 00:02:27.483479 | orchestrator |       + id = (known after apply)
2026-04-05 00:02:27.483483 | orchestrator |     }
2026-04-05 00:02:27.483486 | orchestrator |
2026-04-05 00:02:27.483490 | orchestrator |   # local_sensitive_file.id_rsa will be created
2026-04-05 00:02:27.483494 | orchestrator |   + resource "local_sensitive_file" "id_rsa" {
2026-04-05 00:02:27.483498 | orchestrator |       + content = (sensitive value)
2026-04-05 00:02:27.483501 | orchestrator |       + content_base64sha256 = (known after apply)
2026-04-05 00:02:27.483505 | orchestrator |       + content_base64sha512 = (known after apply)
2026-04-05 00:02:27.483509 | orchestrator |       + content_md5 = (known after apply)
2026-04-05 00:02:27.483513 | orchestrator |       + content_sha1 = (known after apply)
2026-04-05 00:02:27.483516 | orchestrator |       + content_sha256 = (known after apply)
2026-04-05 00:02:27.483520 | orchestrator |       + content_sha512 = (known after apply)
2026-04-05 00:02:27.483524 | orchestrator |       + directory_permission = "0700"
2026-04-05 00:02:27.483528 | orchestrator |       + file_permission = "0600"
2026-04-05 00:02:27.483532 | orchestrator |       + filename = ".id_rsa.ci"
2026-04-05 00:02:27.483535 | orchestrator |       + id = (known after apply)
2026-04-05 00:02:27.483539 | orchestrator |     }
2026-04-05 00:02:27.483543 | orchestrator |
2026-04-05 00:02:27.483547 | orchestrator |   # null_resource.node_semaphore will be created
2026-04-05 00:02:27.483550 | orchestrator |   + resource "null_resource" "node_semaphore" {
2026-04-05 00:02:27.483554 | orchestrator |       + id = (known after apply)
2026-04-05 00:02:27.483558 | orchestrator |     }
2026-04-05 00:02:27.483564 | orchestrator |
2026-04-05 00:02:27.483567 | orchestrator |   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-04-05 00:02:27.483571 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-04-05 00:02:27.483575 | orchestrator |       + attachment = (known after apply)
2026-04-05 00:02:27.483579 | orchestrator |       + availability_zone = "nova"
2026-04-05 00:02:27.483582 | orchestrator |       + id = (known after apply)
2026-04-05 00:02:27.483586 | orchestrator |       + image_id = (known after apply)
2026-04-05 00:02:27.483590 | orchestrator |       + metadata = (known after apply)
2026-04-05 00:02:27.483594 | orchestrator |       + name = "testbed-volume-manager-base"
2026-04-05 00:02:27.483597 | orchestrator |       + region = (known after apply)
2026-04-05 00:02:27.483601 | orchestrator |       + size = 80
2026-04-05 00:02:27.483605 | orchestrator |       + volume_retype_policy = "never"
2026-04-05 00:02:27.483609 | orchestrator |       + volume_type = "ssd"
2026-04-05 00:02:27.483612 | orchestrator |     }
2026-04-05 00:02:27.483616 | orchestrator |
2026-04-05 00:02:27.483620 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-04-05 00:02:27.483624 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-05 00:02:27.483627 | orchestrator |       + attachment = (known after apply)
2026-04-05 00:02:27.483631 | orchestrator |       + availability_zone = "nova"
2026-04-05 00:02:27.483635 | orchestrator |       + id = (known after apply)
2026-04-05 00:02:27.483642 | orchestrator |       + image_id = (known after apply)
2026-04-05 00:02:27.483646 | orchestrator |       + metadata = (known after apply)
2026-04-05 00:02:27.483649 | orchestrator |       + name = "testbed-volume-0-node-base"
2026-04-05 00:02:27.483653 | orchestrator |       + region = (known after apply)
2026-04-05 00:02:27.483657 | orchestrator |       + size = 80
2026-04-05 00:02:27.483661 | orchestrator |       + volume_retype_policy = "never"
2026-04-05 00:02:27.483664 | orchestrator |       + volume_type = "ssd"
2026-04-05 00:02:27.483668 | orchestrator |     }
2026-04-05 00:02:27.483672 | orchestrator |
2026-04-05 00:02:27.483676 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-04-05 00:02:27.483679 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-05 00:02:27.483683 | orchestrator |       + attachment = (known after apply)
2026-04-05 00:02:27.483687 | orchestrator |       + availability_zone = "nova"
2026-04-05 00:02:27.483691 | orchestrator |       + id = (known after apply)
2026-04-05 00:02:27.483694 | orchestrator |       + image_id = (known after apply)
2026-04-05 00:02:27.483698 | orchestrator |       + metadata = (known after apply)
2026-04-05 00:02:27.483702 | orchestrator |       + name = "testbed-volume-1-node-base"
2026-04-05 00:02:27.483705 | orchestrator |       + region = (known after apply)
2026-04-05 00:02:27.483709 | orchestrator |       + size = 80
2026-04-05 00:02:27.483713 | orchestrator |       + volume_retype_policy = "never"
2026-04-05 00:02:27.483717 | orchestrator |       + volume_type = "ssd"
2026-04-05 00:02:27.483720 | orchestrator |     }
2026-04-05 00:02:27.483724 | orchestrator |
2026-04-05 00:02:27.483728 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-04-05 00:02:27.483731 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-05 00:02:27.483735 | orchestrator |       + attachment = (known after apply)
2026-04-05 00:02:27.483739 | orchestrator |       + availability_zone = "nova"
2026-04-05 00:02:27.483743 | orchestrator |       + id = (known after apply)
2026-04-05 00:02:27.483746 | orchestrator |       + image_id = (known after apply)
2026-04-05 00:02:27.483750 | orchestrator |       + metadata = (known after apply)
2026-04-05 00:02:27.483754 | orchestrator |       + name = "testbed-volume-2-node-base"
2026-04-05 00:02:27.483758 | orchestrator |       + region = (known after apply)
2026-04-05 00:02:27.483761 | orchestrator |       + size = 80
2026-04-05 00:02:27.483767 | orchestrator |       + volume_retype_policy = "never"
2026-04-05 00:02:27.483771 | orchestrator |       + volume_type = "ssd"
2026-04-05 00:02:27.483775 | orchestrator |     }
2026-04-05 00:02:27.483779 | orchestrator |
2026-04-05 00:02:27.483783 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-04-05 00:02:27.483786 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-05 00:02:27.483790 | orchestrator |       + attachment = (known after apply)
2026-04-05 00:02:27.483794 | orchestrator |       + availability_zone = "nova"
2026-04-05 00:02:27.483797 | orchestrator |       + id = (known after apply)
2026-04-05 00:02:27.483801 | orchestrator |       + image_id = (known after apply)
2026-04-05 00:02:27.483805 | orchestrator |       + metadata = (known after apply)
2026-04-05 00:02:27.483809 | orchestrator |       + name = "testbed-volume-3-node-base"
2026-04-05 00:02:27.483812 | orchestrator |       + region = (known after apply)
2026-04-05 00:02:27.483816 | orchestrator |       + size = 80
2026-04-05 00:02:27.483820 | orchestrator |       + volume_retype_policy = "never"
2026-04-05 00:02:27.483824 | orchestrator |       + volume_type = "ssd"
2026-04-05 00:02:27.483827 | orchestrator |     }
2026-04-05 00:02:27.483833 | orchestrator |
2026-04-05 00:02:27.483837 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-04-05 00:02:27.483841 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-05 00:02:27.483845 | orchestrator |       + attachment = (known after apply)
2026-04-05 00:02:27.483848 | orchestrator |       + availability_zone = "nova"
2026-04-05 00:02:27.483852 | orchestrator |       + id = (known after apply)
2026-04-05 00:02:27.483860 | orchestrator |       + image_id = (known after apply)
2026-04-05 00:02:27.483864 | orchestrator |       + metadata = (known after apply)
2026-04-05 00:02:27.483867 | orchestrator |       + name = "testbed-volume-4-node-base"
2026-04-05 00:02:27.483871 | orchestrator |       + region = (known after apply)
2026-04-05 00:02:27.483875 | orchestrator |       + size = 80
2026-04-05 00:02:27.483922 | orchestrator |       + volume_retype_policy = "never"
2026-04-05 00:02:27.483926 | orchestrator |       + volume_type = "ssd"
2026-04-05 00:02:27.483930 | orchestrator |     }
2026-04-05 00:02:27.483934 | orchestrator |
2026-04-05 00:02:27.483937 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-04-05 00:02:27.483941 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-05 00:02:27.483945 | orchestrator |       + attachment = (known after apply)
2026-04-05 00:02:27.483948 | orchestrator |       + availability_zone = "nova"
2026-04-05 00:02:27.483952 | orchestrator |       + id = (known after apply)
2026-04-05 00:02:27.483956 | orchestrator |       + image_id = (known after apply)
2026-04-05 00:02:27.483960 | orchestrator |       + metadata = (known after apply)
2026-04-05 00:02:27.483963 | orchestrator |       + name = "testbed-volume-5-node-base"
2026-04-05 00:02:27.483967 | orchestrator |       + region = (known after apply)
2026-04-05 00:02:27.483971 | orchestrator |       + size = 80
2026-04-05 00:02:27.483974 | orchestrator |       + volume_retype_policy = "never"
2026-04-05 00:02:27.483984 | orchestrator |       + volume_type = "ssd"
2026-04-05 00:02:27.483987 | orchestrator |     }
2026-04-05 00:02:27.483991 | orchestrator |
2026-04-05 00:02:27.483995 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-04-05 00:02:27.483999 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-05 00:02:27.484002 | orchestrator |       + attachment = (known after apply)
2026-04-05 00:02:27.484006 | orchestrator |       + availability_zone = "nova"
2026-04-05 00:02:27.484010 | orchestrator |       + id = (known after apply)
2026-04-05 00:02:27.484013 | orchestrator |       + metadata = (known after apply)
2026-04-05 00:02:27.484017 | orchestrator |       + name = "testbed-volume-0-node-3"
2026-04-05 00:02:27.484021 | orchestrator |       + region = (known after apply)
2026-04-05 00:02:27.484025 | orchestrator |       + size = 20
2026-04-05 00:02:27.484028 | orchestrator |       + volume_retype_policy = "never"
2026-04-05 00:02:27.484032 | orchestrator |       + volume_type = "ssd"
2026-04-05 00:02:27.484036 | orchestrator |     }
2026-04-05 00:02:27.484040 | orchestrator |
2026-04-05 00:02:27.484043 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-04-05 00:02:27.484047 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-05 00:02:27.484051 | orchestrator |       + attachment = (known after apply)
2026-04-05 00:02:27.484054 | orchestrator |       + availability_zone = "nova"
2026-04-05 00:02:27.484058 | orchestrator |       + id = (known after apply)
2026-04-05 00:02:27.484062 | orchestrator |       + metadata = (known after apply)
2026-04-05 00:02:27.484065 | orchestrator |       + name = "testbed-volume-1-node-4"
2026-04-05 00:02:27.484069 | orchestrator |       + region = (known after apply)
2026-04-05 00:02:27.484073 | orchestrator |       + size = 20
2026-04-05 00:02:27.484076 | orchestrator |       + volume_retype_policy = "never"
2026-04-05 00:02:27.484080 | orchestrator |       + volume_type = "ssd"
2026-04-05 00:02:27.484084 | orchestrator |     }
2026-04-05 00:02:27.484088 | orchestrator |
2026-04-05 00:02:27.484091 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-04-05 00:02:27.484095 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-05 00:02:27.484099 | orchestrator |       + attachment = (known after apply)
2026-04-05 00:02:27.484102 | orchestrator |       + availability_zone = "nova"
2026-04-05 00:02:27.484106 | orchestrator |       + id = (known after apply)
2026-04-05 00:02:27.484110 | orchestrator |       + metadata = (known after apply)
2026-04-05 00:02:27.484113 | orchestrator |       + name = "testbed-volume-2-node-5"
2026-04-05 00:02:27.484117 | orchestrator |       + region = (known after apply)
2026-04-05 00:02:27.484124 | orchestrator |       + size = 20
2026-04-05 00:02:27.484128 | orchestrator |       + volume_retype_policy = "never"
2026-04-05 00:02:27.484132 | orchestrator |       + volume_type = "ssd"
2026-04-05 00:02:27.484136 | orchestrator |     }
2026-04-05 00:02:27.484139 | orchestrator |
2026-04-05 00:02:27.484143 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-04-05 00:02:27.484147 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-05 00:02:27.484150 | orchestrator |       + attachment = (known after apply)
2026-04-05 00:02:27.484154 | orchestrator |       + availability_zone = "nova"
2026-04-05 00:02:27.484158 | orchestrator |       + id = (known after apply)
2026-04-05 00:02:27.484164 | orchestrator |       + metadata = (known after apply)
2026-04-05 00:02:27.484168 | orchestrator |       + name = "testbed-volume-3-node-3"
2026-04-05 00:02:27.484172 | orchestrator |       + region = (known after apply)
2026-04-05 00:02:27.484175 | orchestrator |       + size = 20
2026-04-05 00:02:27.484179 | orchestrator |       + volume_retype_policy = "never"
2026-04-05 00:02:27.484183 | orchestrator |       + volume_type = "ssd"
2026-04-05 00:02:27.484186 | orchestrator |     }
2026-04-05 00:02:27.490141 | orchestrator |
2026-04-05 00:02:27.490237 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-04-05 00:02:27.490258 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-05 00:02:27.490275 | orchestrator |       + attachment = (known after apply)
2026-04-05 00:02:27.490291 | orchestrator |       + availability_zone = "nova"
2026-04-05 00:02:27.490307 | orchestrator |       + id = (known after apply)
2026-04-05 00:02:27.490323 | orchestrator |       + metadata = (known after apply)
2026-04-05 00:02:27.490339 | orchestrator |       + name = "testbed-volume-4-node-4"
2026-04-05 00:02:27.490355 | orchestrator |       + region = (known after apply)
2026-04-05 00:02:27.490371 | orchestrator |       + size = 20
2026-04-05 00:02:27.490386 | orchestrator |       + volume_retype_policy = "never"
2026-04-05 00:02:27.490402 | orchestrator |       + volume_type = "ssd"
2026-04-05 00:02:27.490417 | orchestrator |     }
2026-04-05 00:02:27.490433 | orchestrator |
2026-04-05 00:02:27.490449 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-04-05 00:02:27.490465 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-05 00:02:27.490481 | orchestrator |       + attachment = (known after apply)
2026-04-05 00:02:27.490497 | orchestrator |       + availability_zone = "nova"
2026-04-05 00:02:27.490512 | orchestrator |       + id = (known after apply)
2026-04-05 00:02:27.490528 | orchestrator |       + metadata = (known after apply)
2026-04-05 00:02:27.490545 | orchestrator |       + name = "testbed-volume-5-node-5"
2026-04-05 00:02:27.490561 | orchestrator |       + region = (known after apply)
2026-04-05 00:02:27.490575 | orchestrator |       + size = 20
2026-04-05 00:02:27.490591 | orchestrator |       + volume_retype_policy = "never"
2026-04-05 00:02:27.490605 | orchestrator |       + volume_type = "ssd"
2026-04-05 00:02:27.490619 | orchestrator |     }
2026-04-05 00:02:27.490634 | orchestrator |
2026-04-05 00:02:27.490649 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-04-05 00:02:27.490664 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-05 00:02:27.490687 | orchestrator |       + attachment = (known after apply)
2026-04-05 00:02:27.490702 | orchestrator |       + availability_zone = "nova"
2026-04-05 00:02:27.490718 | orchestrator |       + id = (known after apply)
2026-04-05 00:02:27.490732 | orchestrator |       + metadata = (known after apply)
2026-04-05 00:02:27.490746 | orchestrator |       + name = "testbed-volume-6-node-3"
2026-04-05 00:02:27.490760 | orchestrator |       + region = (known after apply)
2026-04-05 00:02:27.490773 | orchestrator |       + size = 20
2026-04-05 00:02:27.490787 | orchestrator |       + volume_retype_policy = "never"
2026-04-05 00:02:27.490801 | orchestrator |       + volume_type = "ssd"
2026-04-05 00:02:27.490815 | orchestrator |     }
2026-04-05 00:02:27.490829 | orchestrator |
2026-04-05 00:02:27.490843 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-04-05 00:02:27.490858 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-05 00:02:27.490924 | orchestrator |       + attachment = (known after apply)
2026-04-05 00:02:27.490940 | orchestrator |       + availability_zone = "nova"
2026-04-05 00:02:27.490954 | orchestrator |       + id = (known after apply)
2026-04-05 00:02:27.490968 | orchestrator |       + metadata = (known after apply)
2026-04-05 00:02:27.490982 | orchestrator |       + name = "testbed-volume-7-node-4"
2026-04-05 00:02:27.490996 | orchestrator |       + region = (known after apply)
2026-04-05 00:02:27.491010 | orchestrator | + size = 20 2026-04-05 00:02:27.491024 | orchestrator | + volume_retype_policy = "never" 2026-04-05 00:02:27.491038 | orchestrator | + volume_type = "ssd" 2026-04-05 00:02:27.491052 | orchestrator | } 2026-04-05 00:02:27.491066 | orchestrator | 2026-04-05 00:02:27.491080 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-04-05 00:02:27.491095 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-04-05 00:02:27.491109 | orchestrator | + attachment = (known after apply) 2026-04-05 00:02:27.491123 | orchestrator | + availability_zone = "nova" 2026-04-05 00:02:27.491137 | orchestrator | + id = (known after apply) 2026-04-05 00:02:27.491151 | orchestrator | + metadata = (known after apply) 2026-04-05 00:02:27.491165 | orchestrator | + name = "testbed-volume-8-node-5" 2026-04-05 00:02:27.491179 | orchestrator | + region = (known after apply) 2026-04-05 00:02:27.491193 | orchestrator | + size = 20 2026-04-05 00:02:27.491207 | orchestrator | + volume_retype_policy = "never" 2026-04-05 00:02:27.491221 | orchestrator | + volume_type = "ssd" 2026-04-05 00:02:27.491235 | orchestrator | } 2026-04-05 00:02:27.491249 | orchestrator | 2026-04-05 00:02:27.491263 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-04-05 00:02:27.491278 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-04-05 00:02:27.491292 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-05 00:02:27.491306 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-05 00:02:27.491320 | orchestrator | + all_metadata = (known after apply) 2026-04-05 00:02:27.491334 | orchestrator | + all_tags = (known after apply) 2026-04-05 00:02:27.491348 | orchestrator | + availability_zone = "nova" 2026-04-05 00:02:27.491362 | orchestrator | + config_drive = true 2026-04-05 00:02:27.491388 | orchestrator | + created = (known after apply) 
2026-04-05 00:02:27.491404 | orchestrator | + flavor_id = (known after apply) 2026-04-05 00:02:27.491419 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-04-05 00:02:27.491433 | orchestrator | + force_delete = false 2026-04-05 00:02:27.491448 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-05 00:02:27.491463 | orchestrator | + id = (known after apply) 2026-04-05 00:02:27.491478 | orchestrator | + image_id = (known after apply) 2026-04-05 00:02:27.491493 | orchestrator | + image_name = (known after apply) 2026-04-05 00:02:27.491508 | orchestrator | + key_pair = "testbed" 2026-04-05 00:02:27.491523 | orchestrator | + name = "testbed-manager" 2026-04-05 00:02:27.491538 | orchestrator | + power_state = "active" 2026-04-05 00:02:27.491553 | orchestrator | + region = (known after apply) 2026-04-05 00:02:27.491567 | orchestrator | + security_groups = (known after apply) 2026-04-05 00:02:27.491582 | orchestrator | + stop_before_destroy = false 2026-04-05 00:02:27.491597 | orchestrator | + updated = (known after apply) 2026-04-05 00:02:27.491612 | orchestrator | + user_data = (sensitive value) 2026-04-05 00:02:27.491627 | orchestrator | 2026-04-05 00:02:27.491642 | orchestrator | + block_device { 2026-04-05 00:02:27.491657 | orchestrator | + boot_index = 0 2026-04-05 00:02:27.491673 | orchestrator | + delete_on_termination = false 2026-04-05 00:02:27.491688 | orchestrator | + destination_type = "volume" 2026-04-05 00:02:27.491704 | orchestrator | + multiattach = false 2026-04-05 00:02:27.491740 | orchestrator | + source_type = "volume" 2026-04-05 00:02:27.491756 | orchestrator | + uuid = (known after apply) 2026-04-05 00:02:27.491782 | orchestrator | } 2026-04-05 00:02:27.491798 | orchestrator | 2026-04-05 00:02:27.491813 | orchestrator | + network { 2026-04-05 00:02:27.491829 | orchestrator | + access_network = false 2026-04-05 00:02:27.491844 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-05 00:02:27.491860 | orchestrator | + 
fixed_ip_v6 = (known after apply) 2026-04-05 00:02:27.491875 | orchestrator | + mac = (known after apply) 2026-04-05 00:02:27.491911 | orchestrator | + name = (known after apply) 2026-04-05 00:02:27.491928 | orchestrator | + port = (known after apply) 2026-04-05 00:02:27.491944 | orchestrator | + uuid = (known after apply) 2026-04-05 00:02:27.491959 | orchestrator | } 2026-04-05 00:02:27.491975 | orchestrator | } 2026-04-05 00:02:27.491990 | orchestrator | 2026-04-05 00:02:27.492005 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-04-05 00:02:27.492021 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-05 00:02:27.492037 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-05 00:02:27.492060 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-05 00:02:27.492077 | orchestrator | + all_metadata = (known after apply) 2026-04-05 00:02:27.492093 | orchestrator | + all_tags = (known after apply) 2026-04-05 00:02:27.492108 | orchestrator | + availability_zone = "nova" 2026-04-05 00:02:27.492125 | orchestrator | + config_drive = true 2026-04-05 00:02:27.492141 | orchestrator | + created = (known after apply) 2026-04-05 00:02:27.492157 | orchestrator | + flavor_id = (known after apply) 2026-04-05 00:02:27.492173 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-05 00:02:27.492189 | orchestrator | + force_delete = false 2026-04-05 00:02:27.492205 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-05 00:02:27.492223 | orchestrator | + id = (known after apply) 2026-04-05 00:02:27.492237 | orchestrator | + image_id = (known after apply) 2026-04-05 00:02:27.492252 | orchestrator | + image_name = (known after apply) 2026-04-05 00:02:27.492269 | orchestrator | + key_pair = "testbed" 2026-04-05 00:02:27.492283 | orchestrator | + name = "testbed-node-0" 2026-04-05 00:02:27.492299 | orchestrator | + power_state = "active" 2026-04-05 00:02:27.492315 | orchestrator | + region 
= (known after apply) 2026-04-05 00:02:27.492329 | orchestrator | + security_groups = (known after apply) 2026-04-05 00:02:27.492344 | orchestrator | + stop_before_destroy = false 2026-04-05 00:02:27.492358 | orchestrator | + updated = (known after apply) 2026-04-05 00:02:27.492374 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-05 00:02:27.492389 | orchestrator | 2026-04-05 00:02:27.492403 | orchestrator | + block_device { 2026-04-05 00:02:27.492417 | orchestrator | + boot_index = 0 2026-04-05 00:02:27.492431 | orchestrator | + delete_on_termination = false 2026-04-05 00:02:27.492446 | orchestrator | + destination_type = "volume" 2026-04-05 00:02:27.492461 | orchestrator | + multiattach = false 2026-04-05 00:02:27.492475 | orchestrator | + source_type = "volume" 2026-04-05 00:02:27.492491 | orchestrator | + uuid = (known after apply) 2026-04-05 00:02:27.492505 | orchestrator | } 2026-04-05 00:02:27.492519 | orchestrator | 2026-04-05 00:02:27.492534 | orchestrator | + network { 2026-04-05 00:02:27.492548 | orchestrator | + access_network = false 2026-04-05 00:02:27.492563 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-05 00:02:27.492576 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-05 00:02:27.492591 | orchestrator | + mac = (known after apply) 2026-04-05 00:02:27.492606 | orchestrator | + name = (known after apply) 2026-04-05 00:02:27.492620 | orchestrator | + port = (known after apply) 2026-04-05 00:02:27.492636 | orchestrator | + uuid = (known after apply) 2026-04-05 00:02:27.492649 | orchestrator | } 2026-04-05 00:02:27.492665 | orchestrator | } 2026-04-05 00:02:27.492680 | orchestrator | 2026-04-05 00:02:27.492696 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-04-05 00:02:27.492711 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-05 00:02:27.492728 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-05 
00:02:27.492772 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-05 00:02:27.492789 | orchestrator | + all_metadata = (known after apply) 2026-04-05 00:02:27.492805 | orchestrator | + all_tags = (known after apply) 2026-04-05 00:02:27.492820 | orchestrator | + availability_zone = "nova" 2026-04-05 00:02:27.492834 | orchestrator | + config_drive = true 2026-04-05 00:02:27.492850 | orchestrator | + created = (known after apply) 2026-04-05 00:02:27.492866 | orchestrator | + flavor_id = (known after apply) 2026-04-05 00:02:27.492881 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-05 00:02:27.492974 | orchestrator | + force_delete = false 2026-04-05 00:02:27.492992 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-05 00:02:27.493008 | orchestrator | + id = (known after apply) 2026-04-05 00:02:27.493025 | orchestrator | + image_id = (known after apply) 2026-04-05 00:02:27.493040 | orchestrator | + image_name = (known after apply) 2026-04-05 00:02:27.493054 | orchestrator | + key_pair = "testbed" 2026-04-05 00:02:27.493067 | orchestrator | + name = "testbed-node-1" 2026-04-05 00:02:27.493080 | orchestrator | + power_state = "active" 2026-04-05 00:02:27.493093 | orchestrator | + region = (known after apply) 2026-04-05 00:02:27.493106 | orchestrator | + security_groups = (known after apply) 2026-04-05 00:02:27.493119 | orchestrator | + stop_before_destroy = false 2026-04-05 00:02:27.493132 | orchestrator | + updated = (known after apply) 2026-04-05 00:02:27.493156 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-05 00:02:27.493170 | orchestrator | 2026-04-05 00:02:27.493183 | orchestrator | + block_device { 2026-04-05 00:02:27.493196 | orchestrator | + boot_index = 0 2026-04-05 00:02:27.493209 | orchestrator | + delete_on_termination = false 2026-04-05 00:02:27.493222 | orchestrator | + destination_type = "volume" 2026-04-05 00:02:27.493235 | orchestrator | + multiattach = false 2026-04-05 
00:02:27.493248 | orchestrator | + source_type = "volume" 2026-04-05 00:02:27.493261 | orchestrator | + uuid = (known after apply) 2026-04-05 00:02:27.493274 | orchestrator | } 2026-04-05 00:02:27.493287 | orchestrator | 2026-04-05 00:02:27.493300 | orchestrator | + network { 2026-04-05 00:02:27.493313 | orchestrator | + access_network = false 2026-04-05 00:02:27.493326 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-05 00:02:27.493339 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-05 00:02:27.493351 | orchestrator | + mac = (known after apply) 2026-04-05 00:02:27.493365 | orchestrator | + name = (known after apply) 2026-04-05 00:02:27.493377 | orchestrator | + port = (known after apply) 2026-04-05 00:02:27.493407 | orchestrator | + uuid = (known after apply) 2026-04-05 00:02:27.493420 | orchestrator | } 2026-04-05 00:02:27.493433 | orchestrator | } 2026-04-05 00:02:27.493447 | orchestrator | 2026-04-05 00:02:27.493460 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-04-05 00:02:27.493473 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-05 00:02:27.493486 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-05 00:02:27.493499 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-05 00:02:27.493516 | orchestrator | + all_metadata = (known after apply) 2026-04-05 00:02:27.493529 | orchestrator | + all_tags = (known after apply) 2026-04-05 00:02:27.493541 | orchestrator | + availability_zone = "nova" 2026-04-05 00:02:27.493553 | orchestrator | + config_drive = true 2026-04-05 00:02:27.493565 | orchestrator | + created = (known after apply) 2026-04-05 00:02:27.493578 | orchestrator | + flavor_id = (known after apply) 2026-04-05 00:02:27.493590 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-05 00:02:27.493603 | orchestrator | + force_delete = false 2026-04-05 00:02:27.493617 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-05 
00:02:27.493629 | orchestrator | + id = (known after apply) 2026-04-05 00:02:27.493642 | orchestrator | + image_id = (known after apply) 2026-04-05 00:02:27.493667 | orchestrator | + image_name = (known after apply) 2026-04-05 00:02:27.493681 | orchestrator | + key_pair = "testbed" 2026-04-05 00:02:27.493694 | orchestrator | + name = "testbed-node-2" 2026-04-05 00:02:27.493707 | orchestrator | + power_state = "active" 2026-04-05 00:02:27.493720 | orchestrator | + region = (known after apply) 2026-04-05 00:02:27.493733 | orchestrator | + security_groups = (known after apply) 2026-04-05 00:02:27.493747 | orchestrator | + stop_before_destroy = false 2026-04-05 00:02:27.493760 | orchestrator | + updated = (known after apply) 2026-04-05 00:02:27.493774 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-05 00:02:27.493787 | orchestrator | 2026-04-05 00:02:27.493800 | orchestrator | + block_device { 2026-04-05 00:02:27.493814 | orchestrator | + boot_index = 0 2026-04-05 00:02:27.493827 | orchestrator | + delete_on_termination = false 2026-04-05 00:02:27.493839 | orchestrator | + destination_type = "volume" 2026-04-05 00:02:27.493851 | orchestrator | + multiattach = false 2026-04-05 00:02:27.493864 | orchestrator | + source_type = "volume" 2026-04-05 00:02:27.493876 | orchestrator | + uuid = (known after apply) 2026-04-05 00:02:27.493889 | orchestrator | } 2026-04-05 00:02:27.493926 | orchestrator | 2026-04-05 00:02:27.493937 | orchestrator | + network { 2026-04-05 00:02:27.493950 | orchestrator | + access_network = false 2026-04-05 00:02:27.493962 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-05 00:02:27.493973 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-05 00:02:27.493985 | orchestrator | + mac = (known after apply) 2026-04-05 00:02:27.493998 | orchestrator | + name = (known after apply) 2026-04-05 00:02:27.494010 | orchestrator | + port = (known after apply) 2026-04-05 00:02:27.498119 | orchestrator | + uuid 
= (known after apply) 2026-04-05 00:02:27.498137 | orchestrator | } 2026-04-05 00:02:27.498142 | orchestrator | } 2026-04-05 00:02:27.498147 | orchestrator | 2026-04-05 00:02:27.498160 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-04-05 00:02:27.498166 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-05 00:02:27.498171 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-05 00:02:27.498175 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-05 00:02:27.498179 | orchestrator | + all_metadata = (known after apply) 2026-04-05 00:02:27.498183 | orchestrator | + all_tags = (known after apply) 2026-04-05 00:02:27.498187 | orchestrator | + availability_zone = "nova" 2026-04-05 00:02:27.498191 | orchestrator | + config_drive = true 2026-04-05 00:02:27.498195 | orchestrator | + created = (known after apply) 2026-04-05 00:02:27.498199 | orchestrator | + flavor_id = (known after apply) 2026-04-05 00:02:27.498202 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-05 00:02:27.498206 | orchestrator | + force_delete = false 2026-04-05 00:02:27.498210 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-05 00:02:27.498214 | orchestrator | + id = (known after apply) 2026-04-05 00:02:27.498218 | orchestrator | + image_id = (known after apply) 2026-04-05 00:02:27.498221 | orchestrator | + image_name = (known after apply) 2026-04-05 00:02:27.498225 | orchestrator | + key_pair = "testbed" 2026-04-05 00:02:27.498229 | orchestrator | + name = "testbed-node-3" 2026-04-05 00:02:27.498232 | orchestrator | + power_state = "active" 2026-04-05 00:02:27.498236 | orchestrator | + region = (known after apply) 2026-04-05 00:02:27.498240 | orchestrator | + security_groups = (known after apply) 2026-04-05 00:02:27.498244 | orchestrator | + stop_before_destroy = false 2026-04-05 00:02:27.498247 | orchestrator | + updated = (known after apply) 2026-04-05 00:02:27.498251 | orchestrator | + 
user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-05 00:02:27.498255 | orchestrator | 2026-04-05 00:02:27.498259 | orchestrator | + block_device { 2026-04-05 00:02:27.498263 | orchestrator | + boot_index = 0 2026-04-05 00:02:27.498267 | orchestrator | + delete_on_termination = false 2026-04-05 00:02:27.498271 | orchestrator | + destination_type = "volume" 2026-04-05 00:02:27.498284 | orchestrator | + multiattach = false 2026-04-05 00:02:27.498288 | orchestrator | + source_type = "volume" 2026-04-05 00:02:27.498291 | orchestrator | + uuid = (known after apply) 2026-04-05 00:02:27.498295 | orchestrator | } 2026-04-05 00:02:27.498299 | orchestrator | 2026-04-05 00:02:27.498303 | orchestrator | + network { 2026-04-05 00:02:27.498307 | orchestrator | + access_network = false 2026-04-05 00:02:27.498310 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-05 00:02:27.498314 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-05 00:02:27.498318 | orchestrator | + mac = (known after apply) 2026-04-05 00:02:27.498322 | orchestrator | + name = (known after apply) 2026-04-05 00:02:27.498325 | orchestrator | + port = (known after apply) 2026-04-05 00:02:27.498329 | orchestrator | + uuid = (known after apply) 2026-04-05 00:02:27.498333 | orchestrator | } 2026-04-05 00:02:27.498337 | orchestrator | } 2026-04-05 00:02:27.498340 | orchestrator | 2026-04-05 00:02:27.498345 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-04-05 00:02:27.498348 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-05 00:02:27.498352 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-05 00:02:27.498356 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-05 00:02:27.498360 | orchestrator | + all_metadata = (known after apply) 2026-04-05 00:02:27.498364 | orchestrator | + all_tags = (known after apply) 2026-04-05 00:02:27.498367 | orchestrator | + availability_zone = "nova" 2026-04-05 
00:02:27.498371 | orchestrator | + config_drive = true 2026-04-05 00:02:27.498387 | orchestrator | + created = (known after apply) 2026-04-05 00:02:27.498391 | orchestrator | + flavor_id = (known after apply) 2026-04-05 00:02:27.498395 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-05 00:02:27.498399 | orchestrator | + force_delete = false 2026-04-05 00:02:27.498402 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-05 00:02:27.498406 | orchestrator | + id = (known after apply) 2026-04-05 00:02:27.498410 | orchestrator | + image_id = (known after apply) 2026-04-05 00:02:27.498414 | orchestrator | + image_name = (known after apply) 2026-04-05 00:02:27.498417 | orchestrator | + key_pair = "testbed" 2026-04-05 00:02:27.498421 | orchestrator | + name = "testbed-node-4" 2026-04-05 00:02:27.498425 | orchestrator | + power_state = "active" 2026-04-05 00:02:27.498428 | orchestrator | + region = (known after apply) 2026-04-05 00:02:27.498432 | orchestrator | + security_groups = (known after apply) 2026-04-05 00:02:27.498436 | orchestrator | + stop_before_destroy = false 2026-04-05 00:02:27.498439 | orchestrator | + updated = (known after apply) 2026-04-05 00:02:27.498443 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-05 00:02:27.498447 | orchestrator | 2026-04-05 00:02:27.498451 | orchestrator | + block_device { 2026-04-05 00:02:27.498455 | orchestrator | + boot_index = 0 2026-04-05 00:02:27.498459 | orchestrator | + delete_on_termination = false 2026-04-05 00:02:27.498462 | orchestrator | + destination_type = "volume" 2026-04-05 00:02:27.498466 | orchestrator | + multiattach = false 2026-04-05 00:02:27.498470 | orchestrator | + source_type = "volume" 2026-04-05 00:02:27.498473 | orchestrator | + uuid = (known after apply) 2026-04-05 00:02:27.498477 | orchestrator | } 2026-04-05 00:02:27.498481 | orchestrator | 2026-04-05 00:02:27.498485 | orchestrator | + network { 2026-04-05 00:02:27.498488 | orchestrator | + 
access_network = false 2026-04-05 00:02:27.498492 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-05 00:02:27.498496 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-05 00:02:27.498500 | orchestrator | + mac = (known after apply) 2026-04-05 00:02:27.498503 | orchestrator | + name = (known after apply) 2026-04-05 00:02:27.498507 | orchestrator | + port = (known after apply) 2026-04-05 00:02:27.498511 | orchestrator | + uuid = (known after apply) 2026-04-05 00:02:27.498514 | orchestrator | } 2026-04-05 00:02:27.498518 | orchestrator | } 2026-04-05 00:02:27.498525 | orchestrator | 2026-04-05 00:02:27.498529 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-04-05 00:02:27.498533 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-05 00:02:27.498537 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-05 00:02:27.498540 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-05 00:02:27.498544 | orchestrator | + all_metadata = (known after apply) 2026-04-05 00:02:27.498548 | orchestrator | + all_tags = (known after apply) 2026-04-05 00:02:27.498551 | orchestrator | + availability_zone = "nova" 2026-04-05 00:02:27.498555 | orchestrator | + config_drive = true 2026-04-05 00:02:27.498559 | orchestrator | + created = (known after apply) 2026-04-05 00:02:27.498563 | orchestrator | + flavor_id = (known after apply) 2026-04-05 00:02:27.498566 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-05 00:02:27.498570 | orchestrator | + force_delete = false 2026-04-05 00:02:27.498574 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-05 00:02:27.498578 | orchestrator | + id = (known after apply) 2026-04-05 00:02:27.498581 | orchestrator | + image_id = (known after apply) 2026-04-05 00:02:27.498585 | orchestrator | + image_name = (known after apply) 2026-04-05 00:02:27.498589 | orchestrator | + key_pair = "testbed" 2026-04-05 00:02:27.498592 | orchestrator | 
+ name = "testbed-node-5" 2026-04-05 00:02:27.498596 | orchestrator | + power_state = "active" 2026-04-05 00:02:27.498600 | orchestrator | + region = (known after apply) 2026-04-05 00:02:27.498603 | orchestrator | + security_groups = (known after apply) 2026-04-05 00:02:27.498607 | orchestrator | + stop_before_destroy = false 2026-04-05 00:02:27.498611 | orchestrator | + updated = (known after apply) 2026-04-05 00:02:27.498615 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-05 00:02:27.498618 | orchestrator | 2026-04-05 00:02:27.498622 | orchestrator | + block_device { 2026-04-05 00:02:27.498626 | orchestrator | + boot_index = 0 2026-04-05 00:02:27.498629 | orchestrator | + delete_on_termination = false 2026-04-05 00:02:27.498633 | orchestrator | + destination_type = "volume" 2026-04-05 00:02:27.498637 | orchestrator | + multiattach = false 2026-04-05 00:02:27.498640 | orchestrator | + source_type = "volume" 2026-04-05 00:02:27.498644 | orchestrator | + uuid = (known after apply) 2026-04-05 00:02:27.498648 | orchestrator | } 2026-04-05 00:02:27.498652 | orchestrator | 2026-04-05 00:02:27.498655 | orchestrator | + network { 2026-04-05 00:02:27.498659 | orchestrator | + access_network = false 2026-04-05 00:02:27.498663 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-05 00:02:27.498667 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-05 00:02:27.498670 | orchestrator | + mac = (known after apply) 2026-04-05 00:02:27.498674 | orchestrator | + name = (known after apply) 2026-04-05 00:02:27.498678 | orchestrator | + port = (known after apply) 2026-04-05 00:02:27.498682 | orchestrator | + uuid = (known after apply) 2026-04-05 00:02:27.498685 | orchestrator | } 2026-04-05 00:02:27.498689 | orchestrator | } 2026-04-05 00:02:27.498693 | orchestrator | 2026-04-05 00:02:27.498696 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-04-05 00:02:27.498700 | orchestrator | + resource 
"openstack_compute_keypair_v2" "key" { 2026-04-05 00:02:27.498704 | orchestrator | + fingerprint = (known after apply) 2026-04-05 00:02:27.498708 | orchestrator | + id = (known after apply) 2026-04-05 00:02:27.498711 | orchestrator | + name = "testbed" 2026-04-05 00:02:27.498715 | orchestrator | + private_key = (sensitive value) 2026-04-05 00:02:27.498719 | orchestrator | + public_key = (known after apply) 2026-04-05 00:02:27.498723 | orchestrator | + region = (known after apply) 2026-04-05 00:02:27.498726 | orchestrator | + user_id = (known after apply) 2026-04-05 00:02:27.498730 | orchestrator | } 2026-04-05 00:02:27.498734 | orchestrator | 2026-04-05 00:02:27.498738 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-04-05 00:02:27.498741 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-05 00:02:27.498748 | orchestrator | + device = (known after apply) 2026-04-05 00:02:27.498752 | orchestrator | + id = (known after apply) 2026-04-05 00:02:27.498755 | orchestrator | + instance_id = (known after apply) 2026-04-05 00:02:27.498759 | orchestrator | + region = (known after apply) 2026-04-05 00:02:27.498766 | orchestrator | + volume_id = (known after apply) 2026-04-05 00:02:27.498769 | orchestrator | } 2026-04-05 00:02:27.498773 | orchestrator | 2026-04-05 00:02:27.498777 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-04-05 00:02:27.498785 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-05 00:02:27.498789 | orchestrator | + device = (known after apply) 2026-04-05 00:02:27.498793 | orchestrator | + id = (known after apply) 2026-04-05 00:02:27.498796 | orchestrator | + instance_id = (known after apply) 2026-04-05 00:02:27.498800 | orchestrator | + region = (known after apply) 2026-04-05 00:02:27.498804 | orchestrator | + volume_id = (known after apply) 2026-04-05 
00:02:27.498807 | orchestrator | } 2026-04-05 00:02:27.498811 | orchestrator | 2026-04-05 00:02:27.498815 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-04-05 00:02:27.498819 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-05 00:02:27.498823 | orchestrator | + device = (known after apply) 2026-04-05 00:02:27.498826 | orchestrator | + id = (known after apply) 2026-04-05 00:02:27.498830 | orchestrator | + instance_id = (known after apply) 2026-04-05 00:02:27.498834 | orchestrator | + region = (known after apply) 2026-04-05 00:02:27.498837 | orchestrator | + volume_id = (known after apply) 2026-04-05 00:02:27.498841 | orchestrator | } 2026-04-05 00:02:27.498845 | orchestrator | 2026-04-05 00:02:27.498849 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2026-04-05 00:02:27.498852 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-05 00:02:27.498856 | orchestrator | + device = (known after apply) 2026-04-05 00:02:27.498860 | orchestrator | + id = (known after apply) 2026-04-05 00:02:27.498864 | orchestrator | + instance_id = (known after apply) 2026-04-05 00:02:27.498867 | orchestrator | + region = (known after apply) 2026-04-05 00:02:27.498871 | orchestrator | + volume_id = (known after apply) 2026-04-05 00:02:27.498875 | orchestrator | } 2026-04-05 00:02:27.498879 | orchestrator | 2026-04-05 00:02:27.498882 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2026-04-05 00:02:27.498886 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-05 00:02:27.498901 | orchestrator | + device = (known after apply) 2026-04-05 00:02:27.498905 | orchestrator | + id = (known after apply) 2026-04-05 00:02:27.498908 | orchestrator | + instance_id = (known after apply) 2026-04-05 00:02:27.498912 | 
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  #   (each identical to [5]: all attributes known after apply)

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource
    "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] through [5] will be created
  #   (each identical to [0] except fixed_ip.ip_address: 192.168.16.11, .12, .13, .14, .15)

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up = (known after apply)
      + all_tags       = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  #   (like rule1, but description = "wireguard", protocol = "udp", port_range 51820-51820)
  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  #   (no description or port range, protocol = "tcp", remote_ip_prefix = "192.168.16.0/20")
  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  #   (same as rule3, but protocol = "udp")
  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  #   (no description or port range, protocol = "icmp", remote_ip_prefix = "0.0.0.0/0")

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  #   (no description or port range, protocol = "tcp", remote_ip_prefix = "0.0.0.0/0")
  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  #   (same as node_rule1, but protocol = "udp")
  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  #   (same as node_rule1, but protocol = "icmp")

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  #   (description = "vrrp", protocol = "112", remote_ip_prefix = "0.0.0.0/0")

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  #   (like security_group_management, but description = "node security group", name = "testbed-node")

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags        = (known after apply)
      + cidr            = "192.168.16.0/20"
      + dns_nameservers = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp     = true
      + gateway_ip      = (known after apply)
      + id              = (known after apply)
2026-04-05 00:02:27.501521 | orchestrator | + ip_version = 4 2026-04-05 00:02:27.501525 | orchestrator | + ipv6_address_mode = (known after apply) 2026-04-05 00:02:27.501529 | orchestrator | + ipv6_ra_mode = (known after apply) 2026-04-05 00:02:27.501532 | orchestrator | + name = "subnet-testbed-management" 2026-04-05 00:02:27.501536 | orchestrator | + network_id = (known after apply) 2026-04-05 00:02:27.501540 | orchestrator | + no_gateway = false 2026-04-05 00:02:27.501544 | orchestrator | + region = (known after apply) 2026-04-05 00:02:27.501547 | orchestrator | + service_types = (known after apply) 2026-04-05 00:02:27.501587 | orchestrator | + tenant_id = (known after apply) 2026-04-05 00:02:27.501605 | orchestrator | 2026-04-05 00:02:27.501687 | orchestrator | + allocation_pool { 2026-04-05 00:02:27.501692 | orchestrator | + end = "192.168.31.250" 2026-04-05 00:02:27.501695 | orchestrator | + start = "192.168.31.200" 2026-04-05 00:02:27.501699 | orchestrator | } 2026-04-05 00:02:27.501703 | orchestrator | } 2026-04-05 00:02:27.501707 | orchestrator | 2026-04-05 00:02:27.501711 | orchestrator | # terraform_data.image will be created 2026-04-05 00:02:27.501714 | orchestrator | + resource "terraform_data" "image" { 2026-04-05 00:02:27.501731 | orchestrator | + id = (known after apply) 2026-04-05 00:02:27.501765 | orchestrator | + input = "Ubuntu 24.04" 2026-04-05 00:02:27.501769 | orchestrator | + output = (known after apply) 2026-04-05 00:02:27.501773 | orchestrator | } 2026-04-05 00:02:27.501849 | orchestrator | 2026-04-05 00:02:27.501854 | orchestrator | # terraform_data.image_node will be created 2026-04-05 00:02:27.501865 | orchestrator | + resource "terraform_data" "image_node" { 2026-04-05 00:02:27.501885 | orchestrator | + id = (known after apply) 2026-04-05 00:02:27.501889 | orchestrator | + input = "Ubuntu 24.04" 2026-04-05 00:02:27.501909 | orchestrator | + output = (known after apply) 2026-04-05 00:02:27.501913 | orchestrator | } 2026-04-05 
00:02:27.501916 | orchestrator | 2026-04-05 00:02:27.501920 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 2026-04-05 00:02:27.501924 | orchestrator | 2026-04-05 00:02:27.501927 | orchestrator | Changes to Outputs: 2026-04-05 00:02:27.501931 | orchestrator | + manager_address = (sensitive value) 2026-04-05 00:02:27.501935 | orchestrator | + private_key = (sensitive value) 2026-04-05 00:02:27.837504 | orchestrator | terraform_data.image: Creating... 2026-04-05 00:02:27.837572 | orchestrator | terraform_data.image_node: Creating... 2026-04-05 00:02:28.013078 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=ec6e87ed-716a-879e-d653-16bd150a5106] 2026-04-05 00:02:28.013435 | orchestrator | terraform_data.image: Creation complete after 0s [id=a307eb1d-0693-268c-738a-d0dc0ef0518b] 2026-04-05 00:02:28.046320 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2026-04-05 00:02:28.046482 | orchestrator | openstack_networking_network_v2.net_management: Creating... 2026-04-05 00:02:28.049070 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2026-04-05 00:02:28.049437 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-04-05 00:02:28.053826 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2026-04-05 00:02:28.054975 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2026-04-05 00:02:28.055204 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2026-04-05 00:02:28.055886 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2026-04-05 00:02:28.063011 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2026-04-05 00:02:28.064445 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 
2026-04-05 00:02:28.589244 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2026-04-05 00:02:28.597048 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-04-05 00:02:29.751881 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 2s [id=7f6229ad-b097-4a6a-a896-20bf2d54ea14]
2026-04-05 00:02:29.756556 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-04-05 00:02:29.821650 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-04-05 00:02:29.824620 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-04-05 00:02:29.876187 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-04-05 00:02:29.886554 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-04-05 00:02:29.890443 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=9e45148ae15c1b8eb11dfaed2c25ab5f00544ef1]
2026-04-05 00:02:29.896515 | orchestrator | local_file.id_rsa_pub: Creating...
2026-04-05 00:02:29.899840 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=60a3558975fb864a29a64f179fdac4c742c085fd]
2026-04-05 00:02:29.906094 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-04-05 00:02:30.658590 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=1ca8413e-c8cb-4e27-ade1-8154b2df3f5e]
2026-04-05 00:02:30.663201 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-04-05 00:02:31.679564 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=e02e3eed-6f8b-4cff-9a7e-0f14751ef6ba]
2026-04-05 00:02:31.684231 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-04-05 00:02:31.724216 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=62ed18a5-03b2-4cb7-a868-d43e6cb85064]
2026-04-05 00:02:31.728114 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-04-05 00:02:31.746228 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=831c674b-a7a8-4a18-9cfe-2b7acfd18a4e]
2026-04-05 00:02:31.747393 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-04-05 00:02:31.771137 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=26a11086-b273-42dd-aa8f-9644b133a637]
2026-04-05 00:02:31.849938 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=a543ca24-8ce5-4d4d-a7ab-f0db2d7f7bb2]
2026-04-05 00:02:31.849990 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-04-05 00:02:31.849996 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-04-05 00:02:31.850001 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=caeb3c42-c4b8-40bd-8e18-9e72dc321772]
2026-04-05 00:02:31.850006 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=4c017526-66b5-4804-9f5d-05d3d9a7b1e0]
2026-04-05 00:02:31.850010 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-04-05 00:02:31.850032 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-04-05 00:02:31.850037 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=dde5ff38-a1e5-4746-bab1-211109e78654]
2026-04-05 00:02:31.853345 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=160e21cb-7f36-4211-96c7-9609d25dd0e2]
2026-04-05 00:02:34.025492 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=3ccf04e2-60ac-4e1d-9501-51c6c11a3555]
2026-04-05 00:02:35.157974 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=b4825ea0-ddc9-4dcd-98b7-2aee45b23bac]
2026-04-05 00:02:35.242599 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=d10d19df-84d5-4f9c-9dff-ab89b235cba9]
2026-04-05 00:02:35.253562 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=37f0d12f-2bb4-42f9-a6b7-b33c691698f3]
2026-04-05 00:02:35.266251 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5]
2026-04-05 00:02:35.274300 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=32eac9d0-a992-4d68-8b8e-00fece3b4884]
2026-04-05 00:02:35.284849 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=77f7e0f2-85c8-48ef-ab3c-0b23e9070d00]
2026-04-05 00:02:35.564432 | orchestrator | openstack_networking_router_v2.router: Creation complete after 4s [id=3d43b3aa-0d83-4d55-88a4-4c98d473848b]
2026-04-05 00:02:35.569018 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-04-05 00:02:35.583985 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-04-05 00:02:35.584039 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-04-05 00:02:35.792206 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=45828be5-5e6a-4c14-a632-bd4b700e31e4]
2026-04-05 00:02:35.811171 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-04-05 00:02:35.811248 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-04-05 00:02:35.811257 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-04-05 00:02:35.811283 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-04-05 00:02:35.817724 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-04-05 00:02:35.822147 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-04-05 00:02:35.830864 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=4590d63f-b572-4ae1-9caf-3e63ef58a9db]
2026-04-05 00:02:35.838131 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-04-05 00:02:35.840088 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-04-05 00:02:35.848481 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-04-05 00:02:36.023490 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=c6ecfd72-d163-485a-8cf7-956ade4173f5]
2026-04-05 00:02:36.036923 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-04-05 00:02:36.224128 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=2293d15a-0432-4521-94a5-058ccd465876]
2026-04-05 00:02:36.238407 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-04-05 00:02:36.290247 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=41c6ff16-4877-423f-8a4f-1c765b9ac9df]
2026-04-05 00:02:36.303427 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-04-05 00:02:36.623393 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=1698d8ae-07f8-48cb-9258-8bc4769d9719]
2026-04-05 00:02:36.633329 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-04-05 00:02:36.704529 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=05ef1163-77a9-48ee-896a-fadbef341a47]
2026-04-05 00:02:36.718446 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-04-05 00:02:37.078586 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=09d1bbf8-edb2-415d-aed9-2ac5f421501a]
2026-04-05 00:02:37.087889 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-04-05 00:02:37.100802 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=4fdf1ad3-6a80-4cef-9986-bb334a18fd51]
2026-04-05 00:02:37.106073 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-04-05 00:02:37.111951 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=64d0a3e6-a662-4d73-85fb-f4e535ba42a8]
2026-04-05 00:02:37.657821 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=1edfb2ff-141b-4964-9d0a-fc924a2bb878]
2026-04-05 00:02:37.675176 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=a933e9e4-6671-4727-a750-f089ba19a94d]
2026-04-05 00:02:37.784541 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 2s [id=9154e648-6d79-437b-80a9-3a0b72247bbe]
2026-04-05 00:02:37.993996 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=fb48f17f-2862-4c40-9608-e6791fb88941]
2026-04-05 00:02:38.038353 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 2s [id=1d0adfef-ef26-47fb-96aa-e98150ccf5f5]
2026-04-05 00:02:38.279485 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=dd4fe4fb-99aa-44da-abfa-788824866f46]
2026-04-05 00:02:39.256734 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 3s [id=0f8552f9-0440-4f30-897c-2769859c0d03]
2026-04-05 00:02:39.273000 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=1e180f9c-0cc5-44ab-8069-a82338af134a]
2026-04-05 00:02:39.296017 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-04-05 00:02:39.304176 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-04-05 00:02:39.311704 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-04-05 00:02:39.313150 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-04-05 00:02:39.319631 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-04-05 00:02:39.323495 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-04-05 00:02:39.326438 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-04-05 00:02:39.442874 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 3s [id=ce30412a-4d5f-4df5-b47a-5739635d0706]
2026-04-05 00:02:41.639512 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 3s [id=77fac6ca-df42-4915-855c-0a1cd42f5c64]
2026-04-05 00:02:41.648794 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-04-05 00:02:41.656866 | orchestrator | local_file.inventory: Creating...
2026-04-05 00:02:41.659014 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-04-05 00:02:41.663475 | orchestrator | local_file.inventory: Creation complete after 0s [id=ff76510ae8301715ee1aa9e0334b691ccdd34934]
2026-04-05 00:02:41.665833 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=288208ec366d024343ef3d63e3fc9fead89f9970]
2026-04-05 00:02:42.729314 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=77fac6ca-df42-4915-855c-0a1cd42f5c64]
2026-04-05 00:02:49.306187 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-04-05 00:02:49.316609 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-04-05 00:02:49.316902 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-04-05 00:02:49.323034 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-04-05 00:02:49.326206 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-04-05 00:02:49.327365 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-04-05 00:02:59.315513 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-04-05 00:02:59.317706 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-04-05 00:02:59.317836 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-04-05 00:02:59.324227 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-04-05 00:02:59.326298 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-04-05 00:02:59.328500 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-04-05 00:03:09.325305 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-04-05 00:03:09.325403 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-04-05 00:03:09.325418 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-04-05 00:03:09.325442 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-04-05 00:03:09.326383 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-04-05 00:03:09.329718 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-04-05 00:03:09.989575 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=fc64c460-730e-45cf-a219-3792aa8c5d49]
2026-04-05 00:03:10.053889 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=89e95367-ba4f-446b-8f60-1dcc03fd705c]
2026-04-05 00:03:10.192619 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=0d35c3b4-c878-4aad-b245-3ceb53646c02]
2026-04-05 00:03:10.242850 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=d392e119-2ca7-4fa7-98a5-50c7255e2e9c]
2026-04-05 00:03:19.333758 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed]
2026-04-05 00:03:19.333841 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed]
2026-04-05 00:03:20.448750 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 41s [id=01bba145-37fb-4f28-88a1-dbd0b480a217]
2026-04-05 00:03:29.338409 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [50s elapsed]
2026-04-05 00:03:30.560944 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 52s [id=4fe04f03-1fef-4cc7-be9e-a1a1e91dcfed]
2026-04-05 00:03:30.593641 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-04-05 00:03:30.594394 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-04-05 00:03:30.598559 | orchestrator | null_resource.node_semaphore: Creating...
2026-04-05 00:03:30.600980 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=5679223766729102235]
2026-04-05 00:03:30.615152 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-04-05 00:03:30.622113 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-04-05 00:03:30.628170 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-04-05 00:03:30.628226 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-04-05 00:03:30.629727 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-04-05 00:03:30.636131 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-04-05 00:03:30.649838 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-04-05 00:03:30.654055 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-04-05 00:03:34.006115 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=89e95367-ba4f-446b-8f60-1dcc03fd705c/831c674b-a7a8-4a18-9cfe-2b7acfd18a4e]
2026-04-05 00:03:34.019376 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=d392e119-2ca7-4fa7-98a5-50c7255e2e9c/26a11086-b273-42dd-aa8f-9644b133a637]
2026-04-05 00:03:34.053159 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=0d35c3b4-c878-4aad-b245-3ceb53646c02/160e21cb-7f36-4211-96c7-9609d25dd0e2]
2026-04-05 00:03:34.053803 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=89e95367-ba4f-446b-8f60-1dcc03fd705c/62ed18a5-03b2-4cb7-a868-d43e6cb85064]
2026-04-05 00:03:34.080836 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=d392e119-2ca7-4fa7-98a5-50c7255e2e9c/4c017526-66b5-4804-9f5d-05d3d9a7b1e0]
2026-04-05 00:03:40.161559 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 9s [id=0d35c3b4-c878-4aad-b245-3ceb53646c02/e02e3eed-6f8b-4cff-9a7e-0f14751ef6ba]
2026-04-05 00:03:40.162158 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 9s [id=89e95367-ba4f-446b-8f60-1dcc03fd705c/caeb3c42-c4b8-40bd-8e18-9e72dc321772]
2026-04-05 00:03:40.193770 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 9s [id=d392e119-2ca7-4fa7-98a5-50c7255e2e9c/dde5ff38-a1e5-4746-bab1-211109e78654]
2026-04-05 00:03:40.194311 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 9s [id=0d35c3b4-c878-4aad-b245-3ceb53646c02/a543ca24-8ce5-4d4d-a7ab-f0db2d7f7bb2]
2026-04-05 00:03:40.655490 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-04-05 00:03:50.655881 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-04-05 00:03:51.231747 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=3d16c77c-bb20-4d70-8be0-09113e560510]
2026-04-05 00:03:51.250134 | orchestrator |
2026-04-05 00:03:51.250256 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
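The VRRP rule visible in the plan output (IP protocol 112, open to 0.0.0.0/0 on the management security group) corresponds to a Terraform declaration along these lines. This is a sketch reconstructed only from the plan output above, not the testbed repository's actual source; in particular, the `security_group_id` reference is an assumption, since the plan only shows it as "(known after apply)":

```hcl
# Sketch: attribute values taken from the plan output above; the
# security_group_id reference is assumed, not confirmed by the log.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112" # IP protocol number 112 = VRRP
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}
```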
2026-04-05 00:03:51.250285 | orchestrator |
2026-04-05 00:03:51.250333 | orchestrator | Outputs:
2026-04-05 00:03:51.250354 | orchestrator |
2026-04-05 00:03:51.250373 | orchestrator | manager_address = <sensitive>
2026-04-05 00:03:51.250394 | orchestrator | private_key = <sensitive>
2026-04-05 00:03:51.508333 | orchestrator | ok: Runtime: 0:01:28.836682
2026-04-05 00:03:51.528644 |
2026-04-05 00:03:51.528786 | TASK [Create infrastructure (stable)]
2026-04-05 00:03:52.063083 | orchestrator | skipping: Conditional result was False
2026-04-05 00:03:52.074441 |
2026-04-05 00:03:52.074621 | TASK [Fetch manager address]
2026-04-05 00:03:52.548936 | orchestrator | ok
2026-04-05 00:03:52.561172 |
2026-04-05 00:03:52.561316 | TASK [Set manager_host address]
2026-04-05 00:03:52.642217 | orchestrator | ok
2026-04-05 00:03:52.653002 |
2026-04-05 00:03:52.653136 | LOOP [Update ansible collections]
2026-04-05 00:03:53.572265 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-04-05 00:03:53.572735 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-05 00:03:53.572778 | orchestrator | Starting galaxy collection install process
2026-04-05 00:03:53.572804 | orchestrator | Process install dependency map
2026-04-05 00:03:53.572826 | orchestrator | Starting collection install process
2026-04-05 00:03:53.572847 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons'
2026-04-05 00:03:53.572874 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons
2026-04-05 00:03:53.572905 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-04-05 00:03:53.573119 | orchestrator | ok: Item: commons Runtime: 0:00:00.594931
2026-04-05 00:03:54.583466 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-04-05 00:03:54.583664 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-05 00:03:54.583701 | orchestrator | Starting galaxy collection install process
2026-04-05 00:03:54.583725 | orchestrator | Process install dependency map
2026-04-05 00:03:54.583748 | orchestrator | Starting collection install process
2026-04-05 00:03:54.583769 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services'
2026-04-05 00:03:54.583790 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services
2026-04-05 00:03:54.583809 | orchestrator | osism.services:999.0.0 was installed successfully
2026-04-05 00:03:54.583890 | orchestrator | ok: Item: services Runtime: 0:00:00.742437
2026-04-05 00:03:54.608829 |
2026-04-05 00:03:54.608986 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-04-05 00:04:06.412569 | orchestrator | ok
2026-04-05 00:04:06.426342 |
2026-04-05 00:04:06.426492 | TASK [Wait a little longer for the manager so that everything is ready]
2026-04-05 00:05:06.471098 | orchestrator | ok
2026-04-05 00:05:06.480966 |
2026-04-05 00:05:06.481080 | TASK [Fetch manager ssh hostkey]
2026-04-05 00:05:08.055484 | orchestrator | Output suppressed because no_log was given
2026-04-05 00:05:08.071481 |
2026-04-05 00:05:08.071691 | TASK [Get ssh keypair from terraform environment]
2026-04-05 00:05:08.610582 | orchestrator | ok: Runtime: 0:00:00.017981
2026-04-05 00:05:08.629524 |
2026-04-05 00:05:08.629708 | TASK [Point out that the following task takes some time and does not give any output]
2026-04-05 00:05:08.674193 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
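The task "Wait up to 300 seconds for port 22 to become open and contain 'OpenSSH'" is the kind of readiness check typically written with Ansible's `ansible.builtin.wait_for` module. A minimal sketch follows; the host variable name and `delegate_to` target are assumptions inferred from the task name and surrounding log, not copied from the testbed playbooks:

```yaml
# Sketch only: parameters inferred from the task name above, not taken
# from the osism/testbed playbooks. "manager_host" is an assumed variable.
- name: Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"
  ansible.builtin.wait_for:
    host: "{{ manager_host }}"
    port: 22
    search_regex: OpenSSH   # match the SSH server's version banner
    timeout: 300
  delegate_to: localhost
```

`search_regex` makes the check stronger than a bare TCP connect: the port must not only accept a connection but also return a banner containing "OpenSSH", which filters out half-booted hosts whose sshd is not yet serving.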
2026-04-05 00:05:08.684125 |
2026-04-05 00:05:08.684259 | TASK [Run manager part 0]
2026-04-05 00:05:09.544502 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-05 00:05:09.584288 | orchestrator |
2026-04-05 00:05:09.584321 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-04-05 00:05:09.584328 | orchestrator |
2026-04-05 00:05:09.584339 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-04-05 00:05:13.367769 | orchestrator | ok: [testbed-manager]
2026-04-05 00:05:13.367807 | orchestrator |
2026-04-05 00:05:13.367827 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-04-05 00:05:13.367836 | orchestrator |
2026-04-05 00:05:13.367845 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-05 00:05:15.305161 | orchestrator | ok: [testbed-manager]
2026-04-05 00:05:15.305202 | orchestrator |
2026-04-05 00:05:15.305209 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-04-05 00:05:16.003621 | orchestrator | ok: [testbed-manager]
2026-04-05 00:05:16.003703 | orchestrator |
2026-04-05 00:05:16.003729 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-04-05 00:05:16.070203 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:05:16.070245 | orchestrator |
2026-04-05 00:05:16.070255 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-04-05 00:05:16.104920 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:05:16.104957 | orchestrator |
2026-04-05 00:05:16.104965 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-04-05 00:05:16.139311 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:05:16.139347 | orchestrator |
2026-04-05 00:05:16.139353 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-04-05 00:05:16.872912 | orchestrator | changed: [testbed-manager]
2026-04-05 00:05:16.872967 | orchestrator |
2026-04-05 00:05:16.872979 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-04-05 00:08:42.326444 | orchestrator | changed: [testbed-manager]
2026-04-05 00:08:42.326499 | orchestrator |
2026-04-05 00:08:42.326510 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-04-05 00:10:14.142898 | orchestrator | changed: [testbed-manager]
2026-04-05 00:10:14.142990 | orchestrator |
2026-04-05 00:10:14.143010 | orchestrator | TASK [Install required packages] ***********************************************
2026-04-05 00:10:35.022908 | orchestrator | changed: [testbed-manager]
2026-04-05 00:10:35.023008 | orchestrator |
2026-04-05 00:10:35.023060 | orchestrator | TASK [Remove some python packages] *********************************************
2026-04-05 00:10:44.899032 | orchestrator | changed: [testbed-manager]
2026-04-05 00:10:44.972131 | orchestrator |
2026-04-05 00:10:44.972193 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-04-05 00:10:44.972223 | orchestrator | ok: [testbed-manager]
2026-04-05 00:10:44.972233 | orchestrator |
2026-04-05 00:10:44.972245 | orchestrator | TASK [Get current user] ********************************************************
2026-04-05 00:10:45.890651 | orchestrator | ok: [testbed-manager]
2026-04-05 00:10:45.890723 | orchestrator |
2026-04-05 00:10:45.890733 | orchestrator | TASK [Create venv directory] ***************************************************
2026-04-05 00:10:46.672206 | orchestrator | changed: [testbed-manager]
2026-04-05 00:10:46.672279 | orchestrator |
2026-04-05 00:10:46.672293 | orchestrator | TASK [Install netaddr in venv]
************************************************* 2026-04-05 00:10:53.404932 | orchestrator | changed: [testbed-manager] 2026-04-05 00:10:53.404974 | orchestrator | 2026-04-05 00:10:53.404997 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-04-05 00:10:59.924428 | orchestrator | changed: [testbed-manager] 2026-04-05 00:10:59.924520 | orchestrator | 2026-04-05 00:10:59.924528 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-04-05 00:11:02.840016 | orchestrator | changed: [testbed-manager] 2026-04-05 00:11:02.840085 | orchestrator | 2026-04-05 00:11:02.840095 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-04-05 00:11:04.693147 | orchestrator | changed: [testbed-manager] 2026-04-05 00:11:04.694766 | orchestrator | 2026-04-05 00:11:04.694806 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-04-05 00:11:05.860661 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-05 00:11:05.860723 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-05 00:11:05.860731 | orchestrator | 2026-04-05 00:11:05.860741 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-04-05 00:11:05.907587 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-05 00:11:05.907694 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-05 00:11:05.907711 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-05 00:11:05.907725 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-04-05 00:11:09.393680 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-05 00:11:09.393732 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-05 00:11:09.393738 | orchestrator | 2026-04-05 00:11:09.393744 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-04-05 00:11:09.991217 | orchestrator | changed: [testbed-manager] 2026-04-05 00:11:09.991331 | orchestrator | 2026-04-05 00:11:09.991348 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-04-05 00:14:31.244291 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-04-05 00:14:31.244385 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-04-05 00:14:31.244398 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-04-05 00:14:31.244408 | orchestrator | 2026-04-05 00:14:31.244418 | orchestrator | TASK [Install local collections] *********************************************** 2026-04-05 00:14:33.661185 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-04-05 00:14:33.661275 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-04-05 00:14:33.661290 | orchestrator | 2026-04-05 00:14:33.661304 | orchestrator | PLAY [Create operator user] **************************************************** 2026-04-05 00:14:33.661317 | orchestrator | 2026-04-05 00:14:33.661328 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-05 00:14:35.149167 | orchestrator | ok: [testbed-manager] 2026-04-05 00:14:35.149202 | orchestrator | 2026-04-05 00:14:35.149208 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-04-05 00:14:35.197410 | orchestrator | ok: [testbed-manager] 2026-04-05 00:14:35.197459 | 
orchestrator | 2026-04-05 00:14:35.197466 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-05 00:14:35.264022 | orchestrator | ok: [testbed-manager] 2026-04-05 00:14:35.264055 | orchestrator | 2026-04-05 00:14:35.264061 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-04-05 00:14:36.127103 | orchestrator | changed: [testbed-manager] 2026-04-05 00:14:36.127140 | orchestrator | 2026-04-05 00:14:36.127150 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-04-05 00:14:36.881321 | orchestrator | changed: [testbed-manager] 2026-04-05 00:14:36.881365 | orchestrator | 2026-04-05 00:14:36.881372 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-04-05 00:14:38.338190 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-04-05 00:14:38.338249 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-04-05 00:14:38.338254 | orchestrator | 2026-04-05 00:14:38.338259 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-04-05 00:14:39.878224 | orchestrator | changed: [testbed-manager] 2026-04-05 00:14:39.878280 | orchestrator | 2026-04-05 00:14:39.878286 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-04-05 00:14:41.759792 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-04-05 00:14:41.759844 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-04-05 00:14:41.760069 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-04-05 00:14:41.760084 | orchestrator | 2026-04-05 00:14:41.760093 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-04-05 00:14:41.825316 | orchestrator | skipping: 
[testbed-manager] 2026-04-05 00:14:41.825361 | orchestrator | 2026-04-05 00:14:41.825372 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-04-05 00:14:41.911159 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:14:41.911196 | orchestrator | 2026-04-05 00:14:41.911202 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-04-05 00:14:42.528717 | orchestrator | changed: [testbed-manager] 2026-04-05 00:14:42.528771 | orchestrator | 2026-04-05 00:14:42.528779 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-04-05 00:14:42.601914 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:14:42.601989 | orchestrator | 2026-04-05 00:14:42.602000 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-04-05 00:14:43.491783 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-05 00:14:43.491870 | orchestrator | changed: [testbed-manager] 2026-04-05 00:14:43.491888 | orchestrator | 2026-04-05 00:14:43.491902 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-04-05 00:14:43.526537 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:14:43.526610 | orchestrator | 2026-04-05 00:14:43.526626 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-04-05 00:14:43.557817 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:14:43.557897 | orchestrator | 2026-04-05 00:14:43.557922 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-04-05 00:14:43.586184 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:14:43.586236 | orchestrator | 2026-04-05 00:14:43.586248 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-04-05 00:14:43.658389 | 
orchestrator | skipping: [testbed-manager] 2026-04-05 00:14:43.658417 | orchestrator | 2026-04-05 00:14:43.658423 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-04-05 00:14:44.344158 | orchestrator | ok: [testbed-manager] 2026-04-05 00:14:44.344184 | orchestrator | 2026-04-05 00:14:44.344189 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-04-05 00:14:44.344194 | orchestrator | 2026-04-05 00:14:44.344199 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-05 00:14:45.683448 | orchestrator | ok: [testbed-manager] 2026-04-05 00:14:45.684312 | orchestrator | 2026-04-05 00:14:45.684346 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-04-05 00:14:46.686152 | orchestrator | changed: [testbed-manager] 2026-04-05 00:14:46.686191 | orchestrator | 2026-04-05 00:14:46.686196 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:14:46.686203 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0 2026-04-05 00:14:46.686207 | orchestrator | 2026-04-05 00:14:47.120022 | orchestrator | ok: Runtime: 0:09:37.819077 2026-04-05 00:14:47.137118 | 2026-04-05 00:14:47.137271 | TASK [Point out that logging in to the manager is now possible] 2026-04-05 00:14:47.184250 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2026-04-05 00:14:47.193895 | 2026-04-05 00:14:47.194059 | TASK [Point out that the following task takes some time and does not give any output] 2026-04-05 00:14:47.228035 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output from it here. It takes a few minutes for this task to complete. 
2026-04-05 00:14:47.237325 | 2026-04-05 00:14:47.237456 | TASK [Run manager part 1 + 2] 2026-04-05 00:14:48.145357 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-05 00:14:48.202908 | orchestrator | 2026-04-05 00:14:48.203023 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-04-05 00:14:48.203055 | orchestrator | 2026-04-05 00:14:48.203114 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-05 00:14:51.327608 | orchestrator | ok: [testbed-manager] 2026-04-05 00:14:51.327696 | orchestrator | 2026-04-05 00:14:51.327787 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-04-05 00:14:51.365930 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:14:51.365986 | orchestrator | 2026-04-05 00:14:51.366054 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-04-05 00:14:51.401716 | orchestrator | ok: [testbed-manager] 2026-04-05 00:14:51.401777 | orchestrator | 2026-04-05 00:14:51.401788 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-05 00:14:51.434323 | orchestrator | ok: [testbed-manager] 2026-04-05 00:14:51.434362 | orchestrator | 2026-04-05 00:14:51.434370 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-05 00:14:51.507470 | orchestrator | ok: [testbed-manager] 2026-04-05 00:14:51.507525 | orchestrator | 2026-04-05 00:14:51.507533 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-05 00:14:51.585709 | orchestrator | ok: [testbed-manager] 2026-04-05 00:14:51.585786 | orchestrator | 2026-04-05 00:14:51.585795 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-05 00:14:51.644701 | 
orchestrator | included: /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-04-05 00:14:51.644770 | orchestrator | 2026-04-05 00:14:51.644781 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-05 00:14:52.479598 | orchestrator | ok: [testbed-manager] 2026-04-05 00:14:52.479638 | orchestrator | 2026-04-05 00:14:52.479646 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-05 00:14:52.533229 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:14:52.533271 | orchestrator | 2026-04-05 00:14:52.533278 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-05 00:14:54.008613 | orchestrator | changed: [testbed-manager] 2026-04-05 00:14:54.008719 | orchestrator | 2026-04-05 00:14:54.008739 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-05 00:14:54.619384 | orchestrator | ok: [testbed-manager] 2026-04-05 00:14:54.619435 | orchestrator | 2026-04-05 00:14:54.619447 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-05 00:14:55.838359 | orchestrator | changed: [testbed-manager] 2026-04-05 00:14:55.838422 | orchestrator | 2026-04-05 00:14:55.838440 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-05 00:15:12.123117 | orchestrator | changed: [testbed-manager] 2026-04-05 00:15:12.123183 | orchestrator | 2026-04-05 00:15:12.123198 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-04-05 00:15:12.838552 | orchestrator | ok: [testbed-manager] 2026-04-05 00:15:12.838602 | orchestrator | 2026-04-05 00:15:12.838614 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-04-05 00:15:12.888663 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:15:12.888730 | orchestrator | 2026-04-05 00:15:12.888780 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-04-05 00:15:13.961093 | orchestrator | changed: [testbed-manager] 2026-04-05 00:15:13.961158 | orchestrator | 2026-04-05 00:15:13.961174 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-04-05 00:15:14.954358 | orchestrator | changed: [testbed-manager] 2026-04-05 00:15:14.954447 | orchestrator | 2026-04-05 00:15:14.954463 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-04-05 00:15:15.563439 | orchestrator | changed: [testbed-manager] 2026-04-05 00:15:15.563506 | orchestrator | 2026-04-05 00:15:15.563518 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-04-05 00:15:15.607508 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-05 00:15:15.607577 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-05 00:15:15.607589 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-05 00:15:15.607594 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-04-05 00:15:17.815334 | orchestrator | changed: [testbed-manager] 2026-04-05 00:15:17.815419 | orchestrator | 2026-04-05 00:15:17.815431 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-04-05 00:15:27.327357 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-04-05 00:15:27.327428 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-04-05 00:15:27.327443 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-04-05 00:15:27.327456 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-04-05 00:15:27.327477 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-04-05 00:15:27.327488 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-04-05 00:15:27.327499 | orchestrator | 2026-04-05 00:15:27.327511 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-04-05 00:15:28.480332 | orchestrator | changed: [testbed-manager] 2026-04-05 00:15:28.480384 | orchestrator | 2026-04-05 00:15:28.480396 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-04-05 00:15:31.600678 | orchestrator | changed: [testbed-manager] 2026-04-05 00:15:31.600802 | orchestrator | 2026-04-05 00:15:31.600820 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-04-05 00:15:31.638458 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:15:31.638548 | orchestrator | 2026-04-05 00:15:31.638573 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-04-05 00:17:16.563105 | orchestrator | changed: [testbed-manager] 2026-04-05 00:17:16.563203 | orchestrator | 2026-04-05 00:17:16.563220 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-04-05 00:17:17.782165 | orchestrator | ok: [testbed-manager] 2026-04-05 00:17:17.782203 | 
orchestrator | 2026-04-05 00:17:17.782211 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:17:17.782218 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 2026-04-05 00:17:17.782223 | orchestrator | 2026-04-05 00:17:18.376642 | orchestrator | ok: Runtime: 0:02:30.306936 2026-04-05 00:17:18.393773 | 2026-04-05 00:17:18.393930 | TASK [Reboot manager] 2026-04-05 00:17:19.932005 | orchestrator | ok: Runtime: 0:00:01.001902 2026-04-05 00:17:19.947227 | 2026-04-05 00:17:19.947378 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-04-05 00:17:36.338923 | orchestrator | ok 2026-04-05 00:17:36.348345 | 2026-04-05 00:17:36.348472 | TASK [Wait a little longer for the manager so that everything is ready] 2026-04-05 00:18:36.396884 | orchestrator | ok 2026-04-05 00:18:36.407087 | 2026-04-05 00:18:36.407224 | TASK [Deploy manager + bootstrap nodes] 2026-04-05 00:18:39.204317 | orchestrator | 2026-04-05 00:18:39.204527 | orchestrator | # DEPLOY MANAGER 2026-04-05 00:18:39.204559 | orchestrator | 2026-04-05 00:18:39.204578 | orchestrator | + set -e 2026-04-05 00:18:39.204595 | orchestrator | + echo 2026-04-05 00:18:39.204649 | orchestrator | + echo '# DEPLOY MANAGER' 2026-04-05 00:18:39.204671 | orchestrator | + echo 2026-04-05 00:18:39.204729 | orchestrator | + cat /opt/manager-vars.sh 2026-04-05 00:18:39.208042 | orchestrator | export NUMBER_OF_NODES=6 2026-04-05 00:18:39.208131 | orchestrator | 2026-04-05 00:18:39.208150 | orchestrator | export CEPH_VERSION=reef 2026-04-05 00:18:39.208165 | orchestrator | export CONFIGURATION_VERSION=main 2026-04-05 00:18:39.208177 | orchestrator | export MANAGER_VERSION=latest 2026-04-05 00:18:39.208206 | orchestrator | export OPENSTACK_VERSION=2025.1 2026-04-05 00:18:39.208218 | orchestrator | 2026-04-05 00:18:39.208236 | orchestrator | export ARA=false 2026-04-05 00:18:39.208248 | 
orchestrator | export DEPLOY_MODE=manager 2026-04-05 00:18:39.208265 | orchestrator | export TEMPEST=true 2026-04-05 00:18:39.208277 | orchestrator | export IS_ZUUL=true 2026-04-05 00:18:39.208288 | orchestrator | 2026-04-05 00:18:39.208306 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182 2026-04-05 00:18:39.208317 | orchestrator | export EXTERNAL_API=false 2026-04-05 00:18:39.208328 | orchestrator | 2026-04-05 00:18:39.208338 | orchestrator | export IMAGE_USER=ubuntu 2026-04-05 00:18:39.208353 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-04-05 00:18:39.208364 | orchestrator | 2026-04-05 00:18:39.208375 | orchestrator | export CEPH_STACK=ceph-ansible 2026-04-05 00:18:39.208397 | orchestrator | 2026-04-05 00:18:39.208409 | orchestrator | + echo 2026-04-05 00:18:39.208421 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-05 00:18:39.209345 | orchestrator | ++ export INTERACTIVE=false 2026-04-05 00:18:39.209368 | orchestrator | ++ INTERACTIVE=false 2026-04-05 00:18:39.209383 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-05 00:18:39.209397 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-05 00:18:39.209414 | orchestrator | + source /opt/manager-vars.sh 2026-04-05 00:18:39.209427 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-05 00:18:39.209472 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-05 00:18:39.209562 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-05 00:18:39.209576 | orchestrator | ++ CEPH_VERSION=reef 2026-04-05 00:18:39.209588 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-05 00:18:39.209599 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-05 00:18:39.209637 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-05 00:18:39.209649 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-05 00:18:39.209659 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2026-04-05 00:18:39.209681 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2026-04-05 00:18:39.209693 | orchestrator | ++ 
export ARA=false 2026-04-05 00:18:39.209703 | orchestrator | ++ ARA=false 2026-04-05 00:18:39.209715 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-05 00:18:39.209725 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-05 00:18:39.209736 | orchestrator | ++ export TEMPEST=true 2026-04-05 00:18:39.209746 | orchestrator | ++ TEMPEST=true 2026-04-05 00:18:39.209757 | orchestrator | ++ export IS_ZUUL=true 2026-04-05 00:18:39.209768 | orchestrator | ++ IS_ZUUL=true 2026-04-05 00:18:39.209779 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182 2026-04-05 00:18:39.209790 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182 2026-04-05 00:18:39.209800 | orchestrator | ++ export EXTERNAL_API=false 2026-04-05 00:18:39.209811 | orchestrator | ++ EXTERNAL_API=false 2026-04-05 00:18:39.209821 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-05 00:18:39.209832 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-05 00:18:39.209843 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-05 00:18:39.209854 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-05 00:18:39.209864 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-05 00:18:39.209875 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-05 00:18:39.209891 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-04-05 00:18:39.268360 | orchestrator | + docker version 2026-04-05 00:18:39.378558 | orchestrator | Client: Docker Engine - Community 2026-04-05 00:18:39.378712 | orchestrator | Version: 27.5.1 2026-04-05 00:18:39.378729 | orchestrator | API version: 1.47 2026-04-05 00:18:39.378743 | orchestrator | Go version: go1.22.11 2026-04-05 00:18:39.378754 | orchestrator | Git commit: 9f9e405 2026-04-05 00:18:39.378765 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-05 00:18:39.378777 | orchestrator | OS/Arch: linux/amd64 2026-04-05 00:18:39.378788 | orchestrator | Context: default 2026-04-05 00:18:39.378799 | orchestrator | 2026-04-05 
00:18:39.378811 | orchestrator | Server: Docker Engine - Community 2026-04-05 00:18:39.378830 | orchestrator | Engine: 2026-04-05 00:18:39.378850 | orchestrator | Version: 27.5.1 2026-04-05 00:18:39.378869 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-04-05 00:18:39.378926 | orchestrator | Go version: go1.22.11 2026-04-05 00:18:39.378944 | orchestrator | Git commit: 4c9b3b0 2026-04-05 00:18:39.378960 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-05 00:18:39.378979 | orchestrator | OS/Arch: linux/amd64 2026-04-05 00:18:39.378996 | orchestrator | Experimental: false 2026-04-05 00:18:39.379014 | orchestrator | containerd: 2026-04-05 00:18:39.379032 | orchestrator | Version: v2.2.2 2026-04-05 00:18:39.379051 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9 2026-04-05 00:18:39.379069 | orchestrator | runc: 2026-04-05 00:18:39.379087 | orchestrator | Version: 1.3.4 2026-04-05 00:18:39.379105 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-04-05 00:18:39.379125 | orchestrator | docker-init: 2026-04-05 00:18:39.379139 | orchestrator | Version: 0.19.0 2026-04-05 00:18:39.379150 | orchestrator | GitCommit: de40ad0 2026-04-05 00:18:39.382221 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-04-05 00:18:39.393634 | orchestrator | + set -e 2026-04-05 00:18:39.393728 | orchestrator | + source /opt/manager-vars.sh 2026-04-05 00:18:39.393754 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-05 00:18:39.393776 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-05 00:18:39.393788 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-05 00:18:39.393799 | orchestrator | ++ CEPH_VERSION=reef 2026-04-05 00:18:39.393811 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-05 00:18:39.393822 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-05 00:18:39.393833 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-05 00:18:39.393844 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-05 
00:18:39.393855 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2026-04-05 00:18:39.393865 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2026-04-05 00:18:39.393876 | orchestrator | ++ export ARA=false 2026-04-05 00:18:39.393894 | orchestrator | ++ ARA=false 2026-04-05 00:18:39.393911 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-05 00:18:39.393928 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-05 00:18:39.393946 | orchestrator | ++ export TEMPEST=true 2026-04-05 00:18:39.393964 | orchestrator | ++ TEMPEST=true 2026-04-05 00:18:39.393983 | orchestrator | ++ export IS_ZUUL=true 2026-04-05 00:18:39.394001 | orchestrator | ++ IS_ZUUL=true 2026-04-05 00:18:39.394016 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182 2026-04-05 00:18:39.394079 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182 2026-04-05 00:18:39.394090 | orchestrator | ++ export EXTERNAL_API=false 2026-04-05 00:18:39.394101 | orchestrator | ++ EXTERNAL_API=false 2026-04-05 00:18:39.394112 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-05 00:18:39.394325 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-05 00:18:39.394344 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-05 00:18:39.394355 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-05 00:18:39.394366 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-05 00:18:39.394377 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-05 00:18:39.394403 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-05 00:18:39.394426 | orchestrator | ++ export INTERACTIVE=false 2026-04-05 00:18:39.394436 | orchestrator | ++ INTERACTIVE=false 2026-04-05 00:18:39.394447 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-05 00:18:39.394464 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-05 00:18:39.394475 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-05 00:18:39.394485 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-05 00:18:39.394496 | orchestrator | + 
/opt/configuration/scripts/set-ceph-version.sh reef
2026-04-05 00:18:39.401243 | orchestrator | + set -e
2026-04-05 00:18:39.401315 | orchestrator | + VERSION=reef
2026-04-05 00:18:39.402720 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml
2026-04-05 00:18:39.409934 | orchestrator | + [[ -n ceph_version: reef ]]
2026-04-05 00:18:39.410065 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml
2026-04-05 00:18:39.415002 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2025.1
2026-04-05 00:18:39.421410 | orchestrator | + set -e
2026-04-05 00:18:39.421484 | orchestrator | + VERSION=2025.1
2026-04-05 00:18:39.421737 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml
2026-04-05 00:18:39.424738 | orchestrator | + [[ -n openstack_version: 2024.2 ]]
2026-04-05 00:18:39.424783 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2025.1/g' /opt/configuration/environments/manager/configuration.yml
2026-04-05 00:18:39.429858 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2026-04-05 00:18:39.430881 | orchestrator | ++ semver latest 7.0.0
2026-04-05 00:18:39.501137 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-05 00:18:39.501217 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-04-05 00:18:39.501226 | orchestrator | + echo 'enable_osism_kubernetes: true'
2026-04-05 00:18:39.501869 | orchestrator | ++ semver latest 10.0.0-0
2026-04-05 00:18:39.557569 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-05 00:18:39.558054 | orchestrator | ++ semver 2025.1 2025.1
2026-04-05 00:18:39.643954 | orchestrator | + [[ 0 -ge 0 ]]
2026-04-05 00:18:39.644080 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml
2026-04-05 00:18:39.650768 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml
2026-04-05 00:18:39.656133 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2026-04-05 00:18:39.756953 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-05 00:18:39.758219 | orchestrator | + source /opt/venv/bin/activate
2026-04-05 00:18:39.759488 | orchestrator | ++ deactivate nondestructive
2026-04-05 00:18:39.759561 | orchestrator | ++ '[' -n '' ']'
2026-04-05 00:18:39.759575 | orchestrator | ++ '[' -n '' ']'
2026-04-05 00:18:39.759586 | orchestrator | ++ hash -r
2026-04-05 00:18:39.759635 | orchestrator | ++ '[' -n '' ']'
2026-04-05 00:18:39.759648 | orchestrator | ++ unset VIRTUAL_ENV
2026-04-05 00:18:39.759669 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-04-05 00:18:39.759680 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-04-05 00:18:39.759692 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-04-05 00:18:39.759703 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-04-05 00:18:39.759714 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-04-05 00:18:39.759725 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-04-05 00:18:39.759737 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-05 00:18:39.759796 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-05 00:18:39.759810 | orchestrator | ++ export PATH
2026-04-05 00:18:39.759826 | orchestrator | ++ '[' -n '' ']'
2026-04-05 00:18:39.759921 | orchestrator | ++ '[' -z '' ']'
2026-04-05 00:18:39.759937 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-04-05 00:18:39.759964 | orchestrator | ++ PS1='(venv) '
2026-04-05 00:18:39.759977 | orchestrator | ++ export PS1
2026-04-05 00:18:39.759988 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-04-05 00:18:39.759999 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-04-05 00:18:39.760009 | orchestrator | ++ hash -r
2026-04-05 00:18:39.760024 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2026-04-05 00:18:41.184870 | orchestrator |
2026-04-05 00:18:41.185015 | orchestrator | PLAY [Copy custom facts] *******************************************************
2026-04-05 00:18:41.185043 | orchestrator |
2026-04-05 00:18:41.185064 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-05 00:18:41.790552 | orchestrator | ok: [testbed-manager]
2026-04-05 00:18:41.790645 | orchestrator |
2026-04-05 00:18:41.790655 | orchestrator | TASK [Copy fact files] *********************************************************
2026-04-05 00:18:42.803937 | orchestrator | changed: [testbed-manager]
2026-04-05 00:18:42.804033 | orchestrator |
2026-04-05 00:18:42.804048 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-04-05 00:18:42.804057 | orchestrator |
2026-04-05 00:18:42.804066 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-05 00:18:45.583736 | orchestrator | ok: [testbed-manager]
2026-04-05 00:18:45.583866 | orchestrator |
2026-04-05 00:18:45.583892 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-04-05 00:18:45.641099 | orchestrator | ok: [testbed-manager]
2026-04-05 00:18:45.641228 | orchestrator |
2026-04-05 00:18:45.641253 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-04-05 00:18:46.137896 | orchestrator | changed: [testbed-manager]
2026-04-05 00:18:46.138009 | orchestrator |
2026-04-05 00:18:46.138095 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2026-04-05 00:18:46.185781 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:18:46.185894 | orchestrator |
2026-04-05 00:18:46.185908 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-04-05 00:18:46.554493 | orchestrator | changed: [testbed-manager]
2026-04-05 00:18:46.554667 | orchestrator |
2026-04-05 00:18:46.554687 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2026-04-05 00:18:46.928494 | orchestrator | ok: [testbed-manager]
2026-04-05 00:18:46.928598 | orchestrator |
2026-04-05 00:18:46.928654 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2026-04-05 00:18:47.057698 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:18:47.057796 | orchestrator |
2026-04-05 00:18:47.057812 | orchestrator | PLAY [Apply role traefik] ******************************************************
2026-04-05 00:18:47.057827 | orchestrator |
2026-04-05 00:18:47.057847 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-05 00:18:49.009972 | orchestrator | ok: [testbed-manager]
2026-04-05 00:18:49.010156 | orchestrator |
2026-04-05 00:18:49.010175 | orchestrator | TASK [Apply traefik role] ******************************************************
2026-04-05 00:18:49.136124 | orchestrator | included: osism.services.traefik for testbed-manager
2026-04-05 00:18:49.136223 | orchestrator |
2026-04-05 00:18:49.136239 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2026-04-05 00:18:49.194992 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2026-04-05 00:18:49.195093 | orchestrator |
2026-04-05 00:18:49.195108 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2026-04-05 00:18:50.364089 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2026-04-05 00:18:50.364192 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2026-04-05 00:18:50.364208 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2026-04-05 00:18:50.364220 | orchestrator |
2026-04-05 00:18:50.364235 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2026-04-05 00:18:52.262256 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2026-04-05 00:18:52.262390 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2026-04-05 00:18:52.262409 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2026-04-05 00:18:52.262421 | orchestrator |
2026-04-05 00:18:52.262435 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2026-04-05 00:18:52.922634 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-05 00:18:52.922720 | orchestrator | changed: [testbed-manager]
2026-04-05 00:18:52.922731 | orchestrator |
2026-04-05 00:18:52.922740 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2026-04-05 00:18:53.637743 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-05 00:18:53.637870 | orchestrator | changed: [testbed-manager]
2026-04-05 00:18:53.637897 | orchestrator |
2026-04-05 00:18:53.637918 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2026-04-05 00:18:53.698310 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:18:53.698439 | orchestrator |
2026-04-05 00:18:53.698465 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2026-04-05 00:18:54.065733 | orchestrator | ok: [testbed-manager]
2026-04-05 00:18:54.065857 | orchestrator |
2026-04-05 00:18:54.065885 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2026-04-05 00:18:54.145195 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2026-04-05 00:18:54.145292 | orchestrator |
2026-04-05 00:18:54.145331 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2026-04-05 00:18:55.381203 | orchestrator | changed: [testbed-manager]
2026-04-05 00:18:55.381292 | orchestrator |
2026-04-05 00:18:55.381304 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2026-04-05 00:18:56.278705 | orchestrator | changed: [testbed-manager]
2026-04-05 00:18:56.278810 | orchestrator |
2026-04-05 00:18:56.278828 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2026-04-05 00:19:07.347938 | orchestrator | changed: [testbed-manager]
2026-04-05 00:19:07.348047 | orchestrator |
2026-04-05 00:19:07.348068 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2026-04-05 00:19:07.410889 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:19:07.410976 | orchestrator |
2026-04-05 00:19:07.410988 | orchestrator | PLAY [Deploy manager service] **************************************************
2026-04-05 00:19:07.411041 | orchestrator |
2026-04-05 00:19:07.411059 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-05 00:19:09.346864 | orchestrator | ok: [testbed-manager]
2026-04-05 00:19:09.346958 | orchestrator |
2026-04-05 00:19:09.346968 | orchestrator | TASK [Apply manager role] ******************************************************
2026-04-05 00:19:09.455926 | orchestrator | included: osism.services.manager for testbed-manager
2026-04-05 00:19:09.456006 | orchestrator |
2026-04-05 00:19:09.456014 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-04-05 00:19:09.511068 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-04-05 00:19:09.511146 | orchestrator |
2026-04-05 00:19:09.511159 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-04-05 00:19:12.179172 | orchestrator | ok: [testbed-manager]
2026-04-05 00:19:12.179272 | orchestrator |
2026-04-05 00:19:12.179291 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-04-05 00:19:12.223979 | orchestrator | ok: [testbed-manager]
2026-04-05 00:19:12.224074 | orchestrator |
2026-04-05 00:19:12.224090 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-04-05 00:19:12.368557 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-04-05 00:19:12.368711 | orchestrator |
2026-04-05 00:19:12.368726 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-04-05 00:19:15.563661 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2026-04-05 00:19:15.563756 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2026-04-05 00:19:15.563768 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2026-04-05 00:19:15.563776 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2026-04-05 00:19:15.563782 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-04-05 00:19:15.563789 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2026-04-05 00:19:15.563796 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2026-04-05 00:19:15.563802 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2026-04-05 00:19:15.563809 | orchestrator |
2026-04-05 00:19:15.563816 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-04-05 00:19:16.245861 | orchestrator | changed: [testbed-manager]
2026-04-05 00:19:16.245994 | orchestrator |
2026-04-05 00:19:16.246082 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-04-05 00:19:16.897493 | orchestrator | changed: [testbed-manager]
2026-04-05 00:19:16.897659 | orchestrator |
2026-04-05 00:19:16.897680 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-04-05 00:19:16.988690 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-04-05 00:19:16.988782 | orchestrator |
2026-04-05 00:19:16.988799 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-04-05 00:19:18.304033 | orchestrator | changed: [testbed-manager] => (item=ara)
2026-04-05 00:19:18.304135 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2026-04-05 00:19:18.304152 | orchestrator |
2026-04-05 00:19:18.304165 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-04-05 00:19:19.032470 | orchestrator | changed: [testbed-manager]
2026-04-05 00:19:19.032635 | orchestrator |
2026-04-05 00:19:19.032657 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-04-05 00:19:19.083097 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:19:19.083202 | orchestrator |
2026-04-05 00:19:19.083217 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-04-05 00:19:19.177206 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-04-05 00:19:19.177339 | orchestrator |
2026-04-05 00:19:19.177362 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-04-05 00:19:19.959407 | orchestrator | changed: [testbed-manager]
2026-04-05 00:19:19.959538 | orchestrator |
2026-04-05 00:19:19.959554 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-04-05 00:19:20.015738 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-04-05 00:19:20.015810 | orchestrator |
2026-04-05 00:19:20.015818 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-04-05 00:19:21.426165 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-05 00:19:21.426275 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-05 00:19:21.426292 | orchestrator | changed: [testbed-manager]
2026-04-05 00:19:21.426305 | orchestrator |
2026-04-05 00:19:21.426318 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-04-05 00:19:22.103316 | orchestrator | changed: [testbed-manager]
2026-04-05 00:19:22.103404 | orchestrator |
2026-04-05 00:19:22.103414 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-04-05 00:19:22.181719 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:19:22.181804 | orchestrator |
2026-04-05 00:19:22.181815 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-04-05 00:19:22.287110 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-04-05 00:19:22.287177 | orchestrator |
2026-04-05 00:19:22.287184 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-04-05 00:19:22.852892 | orchestrator | changed: [testbed-manager]
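The xtrace at the top of this excerpt shows the pattern used by `set-ceph-version.sh` and `set-openstack-version.sh`: grep for the key at the start of a line, and only then rewrite its value in place with `sed -i`. A minimal sketch of that pattern, reconstructed from the xtrace output rather than the real script sources (the function name `set_config_version` and the generalization over the key are assumptions):

```shell
#!/usr/bin/env bash
# Sketch reconstructed from the set-*-version.sh xtrace above; the real
# scripts may differ in detail. Rewrites "<key>: <value>" in a YAML file,
# but only when the key already exists at the start of a line.
set -e

set_config_version() {
    local key="$1" version="$2" config="$3"
    # Mirror the `[[ -n $(grep ...) ]]` guard seen in the trace: no match,
    # no edit.
    if [[ -n "$(grep "^${key}:" "$config")" ]]; then
        sed -i "s/${key}: .*/${key}: ${version}/g" "$config"
    fi
}

# Invocation matching the trace (path taken from the log):
# set_config_version ceph_version reef /opt/configuration/environments/manager/configuration.yml
```

Note that the guard makes the script idempotent and conservative: a file that never declared `ceph_version` is left untouched instead of gaining a new key.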
2026-04-05 00:19:22.852997 | orchestrator |
2026-04-05 00:19:22.853012 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-04-05 00:19:23.271665 | orchestrator | changed: [testbed-manager]
2026-04-05 00:19:23.271792 | orchestrator |
2026-04-05 00:19:23.271809 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-04-05 00:19:24.644786 | orchestrator | changed: [testbed-manager] => (item=conductor)
2026-04-05 00:19:24.644862 | orchestrator | changed: [testbed-manager] => (item=openstack)
2026-04-05 00:19:24.644871 | orchestrator |
2026-04-05 00:19:24.644879 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-04-05 00:19:25.315257 | orchestrator | changed: [testbed-manager]
2026-04-05 00:19:25.315335 | orchestrator |
2026-04-05 00:19:25.315344 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-04-05 00:19:25.715309 | orchestrator | ok: [testbed-manager]
2026-04-05 00:19:25.715434 | orchestrator |
2026-04-05 00:19:25.715453 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-04-05 00:19:26.093164 | orchestrator | changed: [testbed-manager]
2026-04-05 00:19:26.093238 | orchestrator |
2026-04-05 00:19:26.093246 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-04-05 00:19:26.143948 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:19:26.144051 | orchestrator |
2026-04-05 00:19:26.144068 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-04-05 00:19:26.218376 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-04-05 00:19:26.218462 | orchestrator |
2026-04-05 00:19:26.218474 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-04-05 00:19:26.270744 | orchestrator | ok: [testbed-manager]
2026-04-05 00:19:26.270863 | orchestrator |
2026-04-05 00:19:26.270882 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-04-05 00:19:28.395130 | orchestrator | changed: [testbed-manager] => (item=osism)
2026-04-05 00:19:28.395223 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2026-04-05 00:19:28.395233 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2026-04-05 00:19:28.395239 | orchestrator |
2026-04-05 00:19:28.395246 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-04-05 00:19:29.155244 | orchestrator | changed: [testbed-manager]
2026-04-05 00:19:29.155376 | orchestrator |
2026-04-05 00:19:29.155407 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-04-05 00:19:29.906690 | orchestrator | changed: [testbed-manager]
2026-04-05 00:19:29.906841 | orchestrator |
2026-04-05 00:19:29.906858 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-04-05 00:19:30.706728 | orchestrator | changed: [testbed-manager]
2026-04-05 00:19:30.706806 | orchestrator |
2026-04-05 00:19:30.706814 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-04-05 00:19:30.779875 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-04-05 00:19:30.779964 | orchestrator |
2026-04-05 00:19:30.779975 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-04-05 00:19:30.831929 | orchestrator | ok: [testbed-manager]
2026-04-05 00:19:30.831995 | orchestrator |
2026-04-05 00:19:30.832002 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-04-05 00:19:31.600421 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2026-04-05 00:19:31.600527 | orchestrator |
2026-04-05 00:19:31.600543 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-04-05 00:19:31.697774 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-04-05 00:19:31.697898 | orchestrator |
2026-04-05 00:19:31.697925 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-04-05 00:19:32.494891 | orchestrator | changed: [testbed-manager]
2026-04-05 00:19:32.494993 | orchestrator |
2026-04-05 00:19:32.495010 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-04-05 00:19:33.180458 | orchestrator | ok: [testbed-manager]
2026-04-05 00:19:33.180555 | orchestrator |
2026-04-05 00:19:33.180569 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-04-05 00:19:33.231978 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:19:33.232073 | orchestrator |
2026-04-05 00:19:33.232089 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-04-05 00:19:33.290191 | orchestrator | ok: [testbed-manager]
2026-04-05 00:19:33.290280 | orchestrator |
2026-04-05 00:19:33.290292 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-04-05 00:19:34.255476 | orchestrator | changed: [testbed-manager]
2026-04-05 00:19:34.255651 | orchestrator |
2026-04-05 00:19:34.255682 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-04-05 00:20:54.032024 | orchestrator | changed: [testbed-manager]
2026-04-05 00:20:54.032141 | orchestrator |
2026-04-05 00:20:54.032160 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-04-05 00:20:55.094257 | orchestrator | ok: [testbed-manager]
2026-04-05 00:20:55.094366 | orchestrator |
2026-04-05 00:20:55.094378 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-04-05 00:20:55.156294 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:20:55.156392 | orchestrator |
2026-04-05 00:20:55.156423 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-04-05 00:21:01.388929 | orchestrator | changed: [testbed-manager]
2026-04-05 00:21:01.389051 | orchestrator |
2026-04-05 00:21:01.389069 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-04-05 00:21:01.494356 | orchestrator | ok: [testbed-manager]
2026-04-05 00:21:01.494427 | orchestrator |
2026-04-05 00:21:01.494434 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-04-05 00:21:01.494439 | orchestrator |
2026-04-05 00:21:01.494444 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-04-05 00:21:01.541518 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:21:01.541653 | orchestrator |
2026-04-05 00:21:01.541661 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-04-05 00:22:01.604655 | orchestrator | Pausing for 60 seconds
2026-04-05 00:22:01.604713 | orchestrator | changed: [testbed-manager]
2026-04-05 00:22:01.604718 | orchestrator |
2026-04-05 00:22:01.604724 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-04-05 00:22:04.814178 | orchestrator | changed: [testbed-manager]
2026-04-05 00:22:04.814287 | orchestrator |
2026-04-05 00:22:04.814306 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-04-05 00:23:06.926367 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-04-05 00:23:06.926467 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-04-05 00:23:06.926480 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left).
2026-04-05 00:23:06.926490 | orchestrator | changed: [testbed-manager]
2026-04-05 00:23:06.926522 | orchestrator |
2026-04-05 00:23:06.926533 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-04-05 00:23:12.998108 | orchestrator | changed: [testbed-manager]
2026-04-05 00:23:12.998255 | orchestrator |
2026-04-05 00:23:12.998272 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-04-05 00:23:13.082807 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-04-05 00:23:13.082922 | orchestrator |
2026-04-05 00:23:13.082947 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-04-05 00:23:13.082967 | orchestrator |
2026-04-05 00:23:13.082984 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-04-05 00:23:13.150402 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:23:13.150569 | orchestrator |
2026-04-05 00:23:13.150595 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-04-05 00:23:13.235210 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-04-05 00:23:13.235288 | orchestrator |
2026-04-05 00:23:13.235297 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-04-05 00:23:14.058312 | orchestrator | changed: [testbed-manager]
2026-04-05 00:23:14.058410 | orchestrator |
2026-04-05 00:23:14.058424 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-04-05 00:23:17.558706 | orchestrator | ok: [testbed-manager]
2026-04-05 00:23:17.558800 | orchestrator |
2026-04-05 00:23:17.558815 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-04-05 00:23:17.643394 | orchestrator | ok: [testbed-manager] => {
2026-04-05 00:23:17.643541 | orchestrator | "version_check_result.stdout_lines": [
2026-04-05 00:23:17.643563 | orchestrator | "=== OSISM Container Version Check ===",
2026-04-05 00:23:17.643575 | orchestrator | "Checking running containers against expected versions...",
2026-04-05 00:23:17.643587 | orchestrator | "",
2026-04-05 00:23:17.643599 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-04-05 00:23:17.643610 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest",
2026-04-05 00:23:17.643621 | orchestrator | " Enabled: true",
2026-04-05 00:23:17.643632 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest",
2026-04-05 00:23:17.643643 | orchestrator | " Status: ✅ MATCH",
2026-04-05 00:23:17.643654 | orchestrator | "",
2026-04-05 00:23:17.643665 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-04-05 00:23:17.643677 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest",
2026-04-05 00:23:17.643688 | orchestrator | " Enabled: true",
2026-04-05 00:23:17.643698 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest",
2026-04-05 00:23:17.643709 | orchestrator | " Status: ✅ MATCH",
2026-04-05 00:23:17.643720 | orchestrator | "",
2026-04-05 00:23:17.643731 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-04-05 00:23:17.643742 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest",
2026-04-05 00:23:17.643752 | orchestrator | " Enabled: true",
2026-04-05 00:23:17.643763 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest",
2026-04-05 00:23:17.643774 | orchestrator | " Status: ✅ MATCH",
2026-04-05 00:23:17.643785 | orchestrator | "",
2026-04-05 00:23:17.643796 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-04-05 00:23:17.643807 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef",
2026-04-05 00:23:17.643818 | orchestrator | " Enabled: true",
2026-04-05 00:23:17.643855 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef",
2026-04-05 00:23:17.643867 | orchestrator | " Status: ✅ MATCH",
2026-04-05 00:23:17.643877 | orchestrator | "",
2026-04-05 00:23:17.643888 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-04-05 00:23:17.643899 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2025.1",
2026-04-05 00:23:17.643909 | orchestrator | " Enabled: true",
2026-04-05 00:23:17.643920 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2025.1",
2026-04-05 00:23:17.643930 | orchestrator | " Status: ✅ MATCH",
2026-04-05 00:23:17.643943 | orchestrator | "",
2026-04-05 00:23:17.643956 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-04-05 00:23:17.643968 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-04-05 00:23:17.643980 | orchestrator | " Enabled: true",
2026-04-05 00:23:17.643993 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-04-05 00:23:17.644006 | orchestrator | " Status: ✅ MATCH",
2026-04-05 00:23:17.644019 | orchestrator | "",
2026-04-05 00:23:17.644031 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-04-05 00:23:17.644043 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-04-05 00:23:17.644057 | orchestrator | " Enabled: true",
2026-04-05 00:23:17.644070 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-04-05 00:23:17.644083 | orchestrator | " Status: ✅ MATCH",
2026-04-05 00:23:17.644095 | orchestrator | "",
2026-04-05 00:23:17.644108 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2026-04-05 00:23:17.644132 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-04-05 00:23:17.644145 | orchestrator | " Enabled: true",
2026-04-05 00:23:17.644157 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-04-05 00:23:17.644169 | orchestrator | " Status: ✅ MATCH",
2026-04-05 00:23:17.644186 | orchestrator | "",
2026-04-05 00:23:17.644198 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2026-04-05 00:23:17.644229 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest",
2026-04-05 00:23:17.644242 | orchestrator | " Enabled: true",
2026-04-05 00:23:17.644254 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest",
2026-04-05 00:23:17.644267 | orchestrator | " Status: ✅ MATCH",
2026-04-05 00:23:17.644279 | orchestrator | "",
2026-04-05 00:23:17.644292 | orchestrator | "Checking service: redis (Redis Cache)",
2026-04-05 00:23:17.644306 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-04-05 00:23:17.644319 | orchestrator | " Enabled: true",
2026-04-05 00:23:17.644329 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-04-05 00:23:17.644340 | orchestrator | " Status: ✅ MATCH",
2026-04-05 00:23:17.644350 | orchestrator | "",
2026-04-05 00:23:17.644361 | orchestrator | "Checking service: api (OSISM API Service)",
2026-04-05 00:23:17.644372 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-04-05 00:23:17.644382 | orchestrator | " Enabled: true",
2026-04-05 00:23:17.644393 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-04-05 00:23:17.644404 | orchestrator | " Status: ✅ MATCH",
2026-04-05 00:23:17.644414 | orchestrator | "",
2026-04-05 00:23:17.644425 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2026-04-05 00:23:17.644436 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-04-05 00:23:17.644447 | orchestrator | " Enabled: true",
2026-04-05 00:23:17.644457 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-04-05 00:23:17.644468 | orchestrator | " Status: ✅ MATCH",
2026-04-05 00:23:17.644478 | orchestrator | "",
2026-04-05 00:23:17.644489 | orchestrator | "Checking service: openstack (OpenStack Integration)",
2026-04-05 00:23:17.644526 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-04-05 00:23:17.644539 | orchestrator | " Enabled: true",
2026-04-05 00:23:17.644550 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-04-05 00:23:17.644560 | orchestrator | " Status: ✅ MATCH",
2026-04-05 00:23:17.644580 | orchestrator | "",
2026-04-05 00:23:17.644590 | orchestrator | "Checking service: beat (Celery Beat Scheduler)",
2026-04-05 00:23:17.644601 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-04-05 00:23:17.644612 | orchestrator | " Enabled: true",
2026-04-05 00:23:17.644622 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-04-05 00:23:17.644633 | orchestrator | " Status: ✅ MATCH",
2026-04-05 00:23:17.644644 | orchestrator | "",
2026-04-05 00:23:17.644654 | orchestrator | "Checking service: flower (Celery Flower Monitor)",
2026-04-05 00:23:17.644684 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-04-05 00:23:17.644695 | orchestrator | " Enabled: true",
2026-04-05 00:23:17.644706 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-04-05 00:23:17.644717 | orchestrator | " Status: ✅ MATCH",
2026-04-05 00:23:17.644728 | orchestrator | "",
2026-04-05 00:23:17.644738 | orchestrator | "=== Summary ===",
2026-04-05 00:23:17.644749 | orchestrator | "Errors (version mismatches): 0",
2026-04-05 00:23:17.644759 | orchestrator | "Warnings (expected containers not running): 0",
2026-04-05 00:23:17.644770 | orchestrator | "",
2026-04-05 00:23:17.644781 | orchestrator | "✅ All running containers match expected versions!"
2026-04-05 00:23:17.644792 | orchestrator | ]
2026-04-05 00:23:17.644803 | orchestrator | }
2026-04-05 00:23:17.644814 | orchestrator |
2026-04-05 00:23:17.644825 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2026-04-05 00:23:17.708064 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:23:17.708182 | orchestrator |
2026-04-05 00:23:17.708198 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:23:17.708215 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2026-04-05 00:23:17.708225 | orchestrator |
2026-04-05 00:23:17.810733 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-05 00:23:17.810846 | orchestrator | + deactivate
2026-04-05 00:23:17.810868 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-04-05 00:23:17.810887 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-05 00:23:17.810903 | orchestrator | + export PATH
2026-04-05 00:23:17.810920 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-04-05 00:23:17.810937 | orchestrator | + '[' -n '' ']'
2026-04-05 00:23:17.810954 | orchestrator | + hash -r
2026-04-05 00:23:17.810971 | orchestrator | + '[' -n '' ']'
2026-04-05 00:23:17.810986 | orchestrator | + unset VIRTUAL_ENV
2026-04-05 00:23:17.811001 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-04-05 00:23:17.811017 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-04-05 00:23:17.811035 | orchestrator | + unset -f deactivate
2026-04-05 00:23:17.811052 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2026-04-05 00:23:17.819818 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-04-05 00:23:17.819878 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-04-05 00:23:17.819888 | orchestrator | + local max_attempts=60
2026-04-05 00:23:17.819897 | orchestrator | + local name=ceph-ansible
2026-04-05 00:23:17.819905 | orchestrator | + local attempt_num=1
2026-04-05 00:23:17.820266 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-05 00:23:17.853262 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-04-05 00:23:17.853331 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-04-05 00:23:17.853344 | orchestrator | + local max_attempts=60
2026-04-05 00:23:17.853357 | orchestrator | + local name=kolla-ansible
2026-04-05 00:23:17.853367 | orchestrator | + local attempt_num=1
2026-04-05 00:23:17.854130 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-04-05 00:23:17.893553 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-04-05 00:23:17.893618 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-04-05 00:23:17.893626 | orchestrator | + local max_attempts=60
2026-04-05 00:23:17.893633 | orchestrator | + local name=osism-ansible
2026-04-05 00:23:17.893639 | orchestrator | + local attempt_num=1
2026-04-05 00:23:17.894365 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-04-05 00:23:17.934339 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-04-05 00:23:17.934427 | orchestrator | + [[ true == \t\r\u\e ]]
2026-04-05 00:23:17.934467 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-04-05 00:23:18.634760 | orchestrator | + docker compose --project-directory /opt/manager ps
2026-04-05 00:23:18.803465 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2026-04-05 00:23:18.803636 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy)
2026-04-05 00:23:18.803659 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2025.1 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy)
2026-04-05 00:23:18.803671 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp
2026-04-05 00:23:18.803685 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp
2026-04-05 00:23:18.803696 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy)
2026-04-05 00:23:18.803707 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy)
2026-04-05 00:23:18.803742 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy)
2026-04-05 00:23:18.803754 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy)
2026-04-05 00:23:18.803765 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp
2026-04-05 00:23:18.803776 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini --
osism…" openstack 2 minutes ago Up 2 minutes (healthy) 2026-04-05 00:23:18.803787 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-04-05 00:23:18.803798 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-04-05 00:23:18.803809 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-04-05 00:23:18.803820 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-04-05 00:23:18.803831 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-04-05 00:23:18.808707 | orchestrator | ++ semver latest 7.0.0 2026-04-05 00:23:18.859312 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-05 00:23:18.859426 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-05 00:23:18.859448 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-04-05 00:23:18.862939 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-04-05 00:23:31.493776 | orchestrator | 2026-04-05 00:23:31 | INFO  | Prepare task for execution of resolvconf. 2026-04-05 00:23:31.730912 | orchestrator | 2026-04-05 00:23:31 | INFO  | Task 43da7945-e61f-43a2-86b5-34387a07db85 (resolvconf) was prepared for execution. 2026-04-05 00:23:31.731030 | orchestrator | 2026-04-05 00:23:31 | INFO  | It takes a moment until task 43da7945-e61f-43a2-86b5-34387a07db85 (resolvconf) has been started and output is visible here. 
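The `wait_for_container_healthy` calls traced earlier (for `ceph-ansible`, `kolla-ansible`, and `osism-ansible`) poll Docker's health status before the deployment continues. A minimal sketch of such a helper is below — only the three locals and the `docker inspect` probe are visible in the trace; the retry interval, the error message, and using `docker` instead of the absolute `/usr/bin/docker` path are assumptions:

```shell
# Sketch of a container-health wait loop, reconstructed from the trace.
# The sleep interval and failure handling are assumptions, not from the log.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll the container's reported health until Docker says "healthy".
    until [ "$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" = "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
```

This pattern relies on the container image defining a `HEALTHCHECK`; without one, `.State.Health` is absent and the probe never returns "healthy".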
2026-04-05 00:23:46.713363 | orchestrator | 2026-04-05 00:23:46.713472 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-04-05 00:23:46.713545 | orchestrator | 2026-04-05 00:23:46.713560 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-05 00:23:46.713572 | orchestrator | Sunday 05 April 2026 00:23:35 +0000 (0:00:00.179) 0:00:00.179 ********** 2026-04-05 00:23:46.713584 | orchestrator | ok: [testbed-manager] 2026-04-05 00:23:46.713597 | orchestrator | 2026-04-05 00:23:46.713608 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-04-05 00:23:46.713620 | orchestrator | Sunday 05 April 2026 00:23:40 +0000 (0:00:04.915) 0:00:05.095 ********** 2026-04-05 00:23:46.713633 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:23:46.713654 | orchestrator | 2026-04-05 00:23:46.713674 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-04-05 00:23:46.713693 | orchestrator | Sunday 05 April 2026 00:23:40 +0000 (0:00:00.078) 0:00:05.173 ********** 2026-04-05 00:23:46.713713 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-04-05 00:23:46.713734 | orchestrator | 2026-04-05 00:23:46.713754 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-04-05 00:23:46.713791 | orchestrator | Sunday 05 April 2026 00:23:40 +0000 (0:00:00.086) 0:00:05.260 ********** 2026-04-05 00:23:46.713807 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-04-05 00:23:46.713818 | orchestrator | 2026-04-05 00:23:46.713829 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-04-05 00:23:46.713841 | orchestrator | Sunday 05 April 2026 00:23:40 +0000 (0:00:00.085) 0:00:05.345 ********** 2026-04-05 00:23:46.713852 | orchestrator | ok: [testbed-manager] 2026-04-05 00:23:46.713863 | orchestrator | 2026-04-05 00:23:46.713874 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-04-05 00:23:46.713885 | orchestrator | Sunday 05 April 2026 00:23:41 +0000 (0:00:01.223) 0:00:06.569 ********** 2026-04-05 00:23:46.713897 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:23:46.713910 | orchestrator | 2026-04-05 00:23:46.713922 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-04-05 00:23:46.713934 | orchestrator | Sunday 05 April 2026 00:23:41 +0000 (0:00:00.074) 0:00:06.644 ********** 2026-04-05 00:23:46.713947 | orchestrator | ok: [testbed-manager] 2026-04-05 00:23:46.713959 | orchestrator | 2026-04-05 00:23:46.713971 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-04-05 00:23:46.713984 | orchestrator | Sunday 05 April 2026 00:23:42 +0000 (0:00:00.604) 0:00:07.249 ********** 2026-04-05 00:23:46.713996 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:23:46.714009 | orchestrator | 2026-04-05 00:23:46.714114 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-04-05 00:23:46.714129 | orchestrator | Sunday 05 April 2026 00:23:42 +0000 (0:00:00.086) 0:00:07.335 ********** 2026-04-05 00:23:46.714142 | orchestrator | changed: [testbed-manager] 2026-04-05 00:23:46.714155 | orchestrator | 2026-04-05 00:23:46.714167 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-04-05 00:23:46.714180 | orchestrator | Sunday 05 April 2026 00:23:42 +0000 (0:00:00.613) 0:00:07.949 ********** 2026-04-05 00:23:46.714192 | orchestrator | changed: 
[testbed-manager] 2026-04-05 00:23:46.714204 | orchestrator | 2026-04-05 00:23:46.714242 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-04-05 00:23:46.714255 | orchestrator | Sunday 05 April 2026 00:23:44 +0000 (0:00:01.181) 0:00:09.130 ********** 2026-04-05 00:23:46.714268 | orchestrator | ok: [testbed-manager] 2026-04-05 00:23:46.714279 | orchestrator | 2026-04-05 00:23:46.714290 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-04-05 00:23:46.714300 | orchestrator | Sunday 05 April 2026 00:23:45 +0000 (0:00:01.037) 0:00:10.168 ********** 2026-04-05 00:23:46.714311 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-04-05 00:23:46.714322 | orchestrator | 2026-04-05 00:23:46.714333 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-04-05 00:23:46.714343 | orchestrator | Sunday 05 April 2026 00:23:45 +0000 (0:00:00.087) 0:00:10.255 ********** 2026-04-05 00:23:46.714354 | orchestrator | changed: [testbed-manager] 2026-04-05 00:23:46.714365 | orchestrator | 2026-04-05 00:23:46.714376 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:23:46.714388 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-05 00:23:46.714398 | orchestrator | 2026-04-05 00:23:46.714409 | orchestrator | 2026-04-05 00:23:46.714420 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:23:46.714438 | orchestrator | Sunday 05 April 2026 00:23:46 +0000 (0:00:01.257) 0:00:11.512 ********** 2026-04-05 00:23:46.714456 | orchestrator | =============================================================================== 2026-04-05 00:23:46.714475 | 
orchestrator | Gathering Facts --------------------------------------------------------- 4.92s 2026-04-05 00:23:46.714516 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.26s 2026-04-05 00:23:46.714534 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.22s 2026-04-05 00:23:46.714551 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.18s 2026-04-05 00:23:46.714569 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.04s 2026-04-05 00:23:46.714585 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.61s 2026-04-05 00:23:46.714630 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.60s 2026-04-05 00:23:46.714651 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2026-04-05 00:23:46.714670 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s 2026-04-05 00:23:46.714684 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2026-04-05 00:23:46.714694 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2026-04-05 00:23:46.714719 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.08s 2026-04-05 00:23:46.714738 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2026-04-05 00:23:46.912073 | orchestrator | + osism apply sshconfig 2026-04-05 00:23:58.201354 | orchestrator | 2026-04-05 00:23:58 | INFO  | Prepare task for execution of sshconfig. 2026-04-05 00:23:58.282084 | orchestrator | 2026-04-05 00:23:58 | INFO  | Task 8bae60b8-7002-4006-8375-66ac96746214 (sshconfig) was prepared for execution. 
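The `osism apply sshconfig` play that follows writes one snippet per host into `.ssh/config.d` and then assembles them into a single ssh config ("Ensure config for each host exist" → "Assemble ssh config"). A rough sketch of that snippet-then-assemble pattern is below — the role itself does this via Ansible's `assemble` module, and the option values, host list, and temp directory here are illustrative only:

```shell
# Illustrative sketch of the config.d snippet-and-assemble pattern.
# A temp dir stands in for the operator's ~/.ssh; all values are examples.
workdir="$(mktemp -d)"
mkdir -p "$workdir/config.d"

# One snippet per managed host.
for host in testbed-manager testbed-node-0 testbed-node-1; do
    cat > "$workdir/config.d/$host" <<EOF
Host $host
    User dragon
    StrictHostKeyChecking yes
EOF
done

# Concatenate all snippets into one ssh config, restricted to the owner.
cat "$workdir/config.d/"* > "$workdir/config"
chmod 600 "$workdir/config"
```

Keeping per-host snippets separate makes the loop idempotent: re-running rewrites each host's file in place, and the final assemble step always reflects the current host set.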
2026-04-05 00:23:58.282180 | orchestrator | 2026-04-05 00:23:58 | INFO  | It takes a moment until task 8bae60b8-7002-4006-8375-66ac96746214 (sshconfig) has been started and output is visible here. 2026-04-05 00:24:10.089977 | orchestrator | 2026-04-05 00:24:10.090162 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-04-05 00:24:10.090180 | orchestrator | 2026-04-05 00:24:10.090192 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-04-05 00:24:10.090204 | orchestrator | Sunday 05 April 2026 00:24:01 +0000 (0:00:00.209) 0:00:00.209 ********** 2026-04-05 00:24:10.090248 | orchestrator | ok: [testbed-manager] 2026-04-05 00:24:10.090262 | orchestrator | 2026-04-05 00:24:10.090273 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-04-05 00:24:10.090284 | orchestrator | Sunday 05 April 2026 00:24:02 +0000 (0:00:00.990) 0:00:01.200 ********** 2026-04-05 00:24:10.090295 | orchestrator | changed: [testbed-manager] 2026-04-05 00:24:10.090306 | orchestrator | 2026-04-05 00:24:10.090317 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-04-05 00:24:10.090327 | orchestrator | Sunday 05 April 2026 00:24:03 +0000 (0:00:00.597) 0:00:01.798 ********** 2026-04-05 00:24:10.090338 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-04-05 00:24:10.090349 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-04-05 00:24:10.090360 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-04-05 00:24:10.090371 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-04-05 00:24:10.090382 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-04-05 00:24:10.090392 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-04-05 00:24:10.090403 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-04-05 00:24:10.090414 | orchestrator | 2026-04-05 00:24:10.090424 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-04-05 00:24:10.090435 | orchestrator | Sunday 05 April 2026 00:24:09 +0000 (0:00:06.046) 0:00:07.844 ********** 2026-04-05 00:24:10.090446 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:24:10.090456 | orchestrator | 2026-04-05 00:24:10.090467 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-04-05 00:24:10.090518 | orchestrator | Sunday 05 April 2026 00:24:09 +0000 (0:00:00.122) 0:00:07.967 ********** 2026-04-05 00:24:10.090533 | orchestrator | changed: [testbed-manager] 2026-04-05 00:24:10.090546 | orchestrator | 2026-04-05 00:24:10.090559 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:24:10.090573 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 00:24:10.090586 | orchestrator | 2026-04-05 00:24:10.090600 | orchestrator | 2026-04-05 00:24:10.090612 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:24:10.090625 | orchestrator | Sunday 05 April 2026 00:24:09 +0000 (0:00:00.587) 0:00:08.554 ********** 2026-04-05 00:24:10.090638 | orchestrator | =============================================================================== 2026-04-05 00:24:10.090651 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 6.05s 2026-04-05 00:24:10.090663 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.99s 2026-04-05 00:24:10.090676 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.60s 2026-04-05 00:24:10.090689 | orchestrator | osism.commons.sshconfig : Assemble ssh config 
--------------------------- 0.59s 2026-04-05 00:24:10.090703 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.12s 2026-04-05 00:24:10.286885 | orchestrator | + osism apply known-hosts 2026-04-05 00:24:21.635403 | orchestrator | 2026-04-05 00:24:21 | INFO  | Prepare task for execution of known-hosts. 2026-04-05 00:24:21.709221 | orchestrator | 2026-04-05 00:24:21 | INFO  | Task 95f4a63c-0f90-4f02-8a45-978068adc471 (known-hosts) was prepared for execution. 2026-04-05 00:24:21.709311 | orchestrator | 2026-04-05 00:24:21 | INFO  | It takes a moment until task 95f4a63c-0f90-4f02-8a45-978068adc471 (known-hosts) has been started and output is visible here. 2026-04-05 00:24:37.929075 | orchestrator | 2026-04-05 00:24:37.929197 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-04-05 00:24:37.929214 | orchestrator | 2026-04-05 00:24:37.929226 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-04-05 00:24:37.929257 | orchestrator | Sunday 05 April 2026 00:24:25 +0000 (0:00:00.195) 0:00:00.195 ********** 2026-04-05 00:24:37.929269 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-05 00:24:37.929280 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-05 00:24:37.929290 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-05 00:24:37.929299 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-05 00:24:37.929309 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-04-05 00:24:37.929318 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-04-05 00:24:37.929338 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-05 00:24:37.929348 | orchestrator | 2026-04-05 00:24:37.929359 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-04-05 
00:24:37.929370 | orchestrator | Sunday 05 April 2026 00:24:31 +0000 (0:00:06.589) 0:00:06.784 ********** 2026-04-05 00:24:37.929381 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-04-05 00:24:37.929393 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-05 00:24:37.929402 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-04-05 00:24:37.929412 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-05 00:24:37.929421 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-05 00:24:37.929431 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-05 00:24:37.929440 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-05 00:24:37.929450 | orchestrator | 2026-04-05 00:24:37.929459 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-05 00:24:37.929512 | orchestrator | Sunday 05 April 2026 00:24:31 +0000 (0:00:00.178) 0:00:06.962 ********** 2026-04-05 00:24:37.929524 | 
orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG7SE7HREcuDWPHeyQO1RlhuwbgOT4PtTWPKc0dGq+2H) 2026-04-05 00:24:37.929538 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC98h1xC7CWBbQd3VBBF8j372MX0MAK11tAc8/JDYpMyUG2QwEfv6d2a13pYgUjoRboE7P3RYFIC1DPGBUAe56WfnefPC2I6uiKrsgos5xs9br3R5PBZeIHNQ0JusBm6vXSE2KyPW47Ms9aQMumLWTY/tJyDmUY3uDexjy6jdVvftkIWtOLThldwhIVqH6QaC2EZvLCHaYt8zkNILLmoI0d3r8DAy15jnFzQLOFbhTthT98C719lLNnSenufzt43Pvk69uODvqJvUcB3shcjrCyugWEHYhF36bdLTH5lMLXAHeCuQzBbtNCcYyOyLysivNmLqKrGKWXwzu1ZAiWTaDmyziJKQyNaWbQ6MMdoEM2PjZqhPRX31s8kyonzJg9Za95sIxfJx4x02eaAf04xUV+cU9IlBDR1MZJ/xdRpSiE3M3rlDy6CibUjuWp2gAnjFXo0/fGGvf9L3Bzno4UaNqGSg5ys8rZY1xbeED3G/raOm2ub8/828YItKpkVcoS41s=) 2026-04-05 00:24:37.929553 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJwFKMM4L5zlhRXMQBqVrYKGWgR5pephkbXlv+BH8sRQvU5l060zGaUnTLW9DItrcVhu6yjo5HQELjAsN8bDOZk=) 2026-04-05 00:24:37.929572 | orchestrator | 2026-04-05 00:24:37.929588 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-05 00:24:37.929605 | orchestrator | Sunday 05 April 2026 00:24:33 +0000 (0:00:01.288) 0:00:08.251 ********** 2026-04-05 00:24:37.929632 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEAO7XjvIOakn4gK7xJBZElk5LXSNTbPGVqOhoEG5e8m) 2026-04-05 00:24:37.929690 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCVNXtnmT/3E7hb84wH4ZGkI2WrAcKdcsxtUg+zr/aA22iMH00uhQu6o9giVw7P2QEx8o5P6L9UqomrRNJWK5yPGwxPMlbDeZ4XO7Eec+EcFQxnWjVg5ebitaBR6QySzNMum0tYGzSQdjX6WtL+NDbjNort/U7tHb0DFoNyFzHxBVdQQEyZZcA5NuCkOFIORUYj1ouP1wYwbRsST5oB1R2o9DXedsbyv9AmyuHZVScSNUNyycbf9p+6HLJPIHpeZ58xtoJjTXFY9b8g9zToNQHCbX/3uyRvr+LQVD8RWS/DDUILq2B/RRyApKLqHAlt0sZUJqKeUMMGdinOHod53W+6gQvdSm7pno2lsyjm2ScxYGy8aaTThrGbQJ8ZmMIesKNgYYEtWBQlaopRoxuhQHaAQadyDUmnDJmsm1XM5L5XhlfQCUWYVabCnAcecOxKpY/mKsIgdLwJ74RzcrqR/OfZfFwkU9pI2uYQliNHzCzLU2AcpDmYgX0tx680MBZYxUE=) 2026-04-05 00:24:37.929707 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEsZrAOuDNv/0VQzefRTOLlNyVhMPHeKR3CJz/GQj9D8Ds9Ez5LY4WkXOlWi18xPuI5YmDP76woBH4nKmhpAQKY=) 2026-04-05 00:24:37.929722 | orchestrator | 2026-04-05 00:24:37.929736 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-05 00:24:37.929750 | orchestrator | Sunday 05 April 2026 00:24:34 +0000 (0:00:01.151) 0:00:09.403 ********** 2026-04-05 00:24:37.929838 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDOudnZidRTj5o9evPExjG/bIXtNUpt/GfvToT8dItUQAdeqN0JUk7tj/ruCN1IGvdfjhGtHUtqa/3ocaBHtQ+cLvnAT0k/6jZEwo9afjJhhOHtbRjfbG4MZXKfNCXHqRKf/1UiaYEFRLRGTQBOqslxHERx9ondciXHNCH4b1Dm+TbYdh9uWB51+jitg3etKoFYDkqCiidFupTGhxL1OkiR5YuHHBy3Za462KZurd/MTBdKQYJvI9GSdChbVxTKf9Qv8dw5H/ipNOnkgv7JYMzwmGek6+v599ncpmQDgsv91DUD8nBhVWuGBbiOqlHOcTnEhylNyKQ9Y25dD/oasq8lKaAUlA+Az6c9/lm7QIIrZ7RVz7efmQEgrZQkpazKmYHcdkjnwBnfidHdw2w1gzuxXR8DJeQdOSNjqlL5SghPLPZoLqge7gcwbk7h155Hr16UTpLWzw8c+QawbkWUBZFqiVUFoZ+SVcMQv7Z50tdeGtDZ8oExJu6KnHkPmKb136s=) 2026-04-05 00:24:37.929856 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM6sdQhhcAepuwCq17g5vjbLvWW8MEgf1R6UCFJmDlRDk1RNxXY9q7n6qVnerGyaZBVoEKxcyHSWRUbQZbWJGvA=) 
2026-04-05 00:24:37.929871 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPpJ3izjQYJVoiIV4/D6DFLd/bpkdwpwBrBTbTn3J6jn) 2026-04-05 00:24:37.929885 | orchestrator | 2026-04-05 00:24:37.929899 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-05 00:24:37.929919 | orchestrator | Sunday 05 April 2026 00:24:35 +0000 (0:00:01.122) 0:00:10.526 ********** 2026-04-05 00:24:37.929932 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCHxpmuDr17J+Gu7oSvQDF1Vm4uen9A3kSAFr0nQ7G6fdUAFzxeuIfR88pg020wS7Mn5tfrk1FzvKlB/YYPN38Y=) 2026-04-05 00:24:37.929946 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDiS9L1SPwvzLN95HOgpAw9p1eU1d9aFTa/RCjGhJ4tnbo/nmCaYuXNPntFKuWbDmdDSuDx3rUAwJcqhrQsCWLDyL5lFYupg6X60EiNRT2MEvaWYERk2/B59qG3SWiQL44Y5OKYemLDg8G5aVXxRrxRcyBOQ3qgwuIONMhc7f6Adqr9Ri2M2BHybyC66QC+ekzbOuW14Kuhdk37BSC9XzkmfhrrqHECKOS30V6RDtDI/wkcMQ3dhPLgXELbq+sFMhxCpkKM+NBEZ8KnYGb8tWck02qWtNX+/yie3tB0dD+5OOXQ8Zm9qAhz/OVK5DYJlsJ7e1cSObK/paElJO8JAO4qRwnfc3HCetIn3stEFrYxAtbEQ8N60GLKovWq3EEzah12F9wC0e6iQTCVEYaC4qE0KWijN9KcPzPJvNkc4mwlps2/Cuc+oANNkoDUvL4jRpPG4h1p6H4kKWXOLUKdAvfdUAU9/0PuDSDVmqLcKGQp37o8Gjs83q84qsTi5ZDJce0=) 2026-04-05 00:24:37.929959 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMPyCsoEXvrdsjFCO1TKDShY8r+qqHv4Xq2JzWkMktl2) 2026-04-05 00:24:37.929971 | orchestrator | 2026-04-05 00:24:37.929984 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-05 00:24:37.929997 | orchestrator | Sunday 05 April 2026 00:24:36 +0000 (0:00:01.102) 0:00:11.629 ********** 2026-04-05 00:24:37.930011 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC5cfDFg4RvwggK0yWpV8qx3B+NuYOj/OoJmFZ+aZtM/7WvIb6BKdEB30sLeijThNTTC8I8QrOlCg9bAZ4M0/AKLcE33zHaDjU09819Sxl26NHtbIw9FHw5EGs5C5RhCaUg/OJeHQ2WrmVya/M9G6el2b1/uaBpvSJS/BM6v8BeLB8OKTCAcplDcvEWsKL9VthbHV7kj6zkN5PlXRjX0FeUc3tBak/kFuMP6Swwcwjp2AqixJyD6HvKJVOV13qfY/ccslb5wzg00lj+CKJZoJVyYe3Zbrsy7GpUeTsCMKq0RIekVqGCx56qfWGRSuZ4bLMTXAMhQIB9tp67OysHmBE1TesRvPjYwY5TKFeRackU4kvAvo3/QggVberZ16Tlpat2y0zmKudvbAze0z4QnAZMhPsEokeNjKyfWeByoMjhhqP7GvgpAmkco8QkGmF6/ZDIcajTk/5dUsgMmMQ3kAIuJ4OyrQfbIaLg4Mc8Aj4NYr/X6fYT0Dp5lJNq0i8WwQU=) 2026-04-05 00:24:37.930089 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFqPf7j5PYoG3SKW8Nw+PL3ab5DLI8isEHau6X1tcfyN3so5s8vXP6ZeW/CLDh3Pnk7HtmZSHY7Tcy9Th6eNsXU=) 2026-04-05 00:24:37.930102 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy5Lj/oP4L7ZoqCrYHZ+XaHmrtl3RfqRdwyN+ZBTIKD) 2026-04-05 00:24:37.930116 | orchestrator | 2026-04-05 00:24:37.930131 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-05 00:24:37.930146 | orchestrator | Sunday 05 April 2026 00:24:37 +0000 (0:00:01.077) 0:00:12.706 ********** 2026-04-05 00:24:37.930173 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCznvfQVVNj8f/t8/iGb4TX4l8sYhRxgIWYasdHMzFOXEK7iefAAoqwxm8gZ72/DLNAD6Gk+HT9wBePPsFH8/6ewbiPOWPCMyV2u1lckM5sEJidDUVbqbvSupgQnS94zULVtSWjWmqdKeDzompI59RbvAbVl0rczIFhWB06viGbPafaaPzT8gCGsupdazgkAWxIi6lCk8t+U4DV7fVqnaXhy7948qORgSk5jTHYs5i6zn0XXVHoXN+e/NrDFK9BfeEzoy3poHCN6C+OCDNjsjFhzHDNDAQJ3umJ60vg0hoQftTIdCc+wj9L1WMs3TgMs3JGqCukB2G2iO7D0ihO5zGw3DFtpvay9T4aBk8YnLUQIdqztq/q4lISGm8onX0EnMz8QM8fuHed+R3iMMdSAudR2haPZSTIOJRQBjoCTVr3apgkunoO0WgLQdKDJ4YOWhMi8FrTjGSHUt0kQ6i7DUc3JUFEtxw7ul4ToexS/qqI62LKSrDuOmpmrXDODjwpduc=) 2026-04-05 00:24:49.793042 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINHjJ7uX60o1FwD3UQUGRhfiCECjsceLMHCGYo5jGW7O) 2026-04-05 00:24:49.793168 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPtVC6XbPsJtUAaerB1Fv/CdudKzoXylnzjBp+US9QXjfjGYHsnybnWwpbwio0jJs1LQsJ3CKYHYr+rcvL5mdeU=) 2026-04-05 00:24:49.793191 | orchestrator | 2026-04-05 00:24:49.793207 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-05 00:24:49.793224 | orchestrator | Sunday 05 April 2026 00:24:38 +0000 (0:00:01.055) 0:00:13.761 ********** 2026-04-05 00:24:49.793239 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLqc5/SVcq5LmY5bQsf5xYf9eZLeQHYPvFDEYkn4PIlT7GKhLpWsdnCqD6DDP2dQHLVH1yI5aURURdk6UqAfr3o=) 2026-04-05 00:24:49.793255 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGts+vuGpVrt6B8KysyVBcY1G7qbL7N9HDwBne7aPqA7) 2026-04-05 00:24:49.793267 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDO2RO7MqDn0JMdfvUJfus4V5S4FGNlVuvj3yeQMZfvXThexNWDb/1S7B+Md7HY2mYFbuUqneSi84pr9rT1mBzQNJhw9mRgv1Xqgc2V/9izDvhF3gT5Wz0m3TKhRbWaxXRzeCYGMjVhwIKchTv0x6KhRi2MviHBKB+twZrSjcclPvojjdW8hE0zf6EyENC3+229Ho0VN5GUG+0ytflu2kIsFCL2uHAUG7SC1MQ5iJVvxLytijDTrq5gx6/E0rH1SsuHxfo5xjMMnp3tDXXBj1yOw5lag49uzUhMdDpsnFzT/4Yf/KIRDC8mpBHJi3nBXbVxOtMokk1hP1snhGBAv6LvLO9cRwtk2lMrDsbIMz7TxvIqUOqICmhMnJfIj5MRHVFek+jpQEbc2b/s/DY3jmsaB/5cV/1gGuZvHA3X1XVGbNlvznTwXUv6t5EDWd8RHjeCpXFHfeIQVUlqo8dVEvJ7jp4oCluNOmWVw76HkHObUbQ0k5pEdMZ/8/Gv6ZiSSRk=) 2026-04-05 00:24:49.793278 | orchestrator | 2026-04-05 00:24:49.793289 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-04-05 00:24:49.793322 | orchestrator | Sunday 05 April 2026 00:24:39 +0000 (0:00:01.064) 
0:00:14.826 ********** 2026-04-05 00:24:49.793335 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-05 00:24:49.793349 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-05 00:24:49.793363 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-05 00:24:49.793400 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-05 00:24:49.793409 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-04-05 00:24:49.793417 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-04-05 00:24:49.793425 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-05 00:24:49.793433 | orchestrator | 2026-04-05 00:24:49.793441 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-04-05 00:24:49.793450 | orchestrator | Sunday 05 April 2026 00:24:45 +0000 (0:00:05.420) 0:00:20.246 ********** 2026-04-05 00:24:49.793459 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-04-05 00:24:49.793490 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-05 00:24:49.793499 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-04-05 00:24:49.793507 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-05 00:24:49.793515 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-05 00:24:49.793525 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-05 00:24:49.793538 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-05 00:24:49.793552 | orchestrator | 2026-04-05 00:24:49.793567 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-05 00:24:49.793580 | orchestrator | Sunday 05 April 2026 00:24:45 +0000 (0:00:00.197) 0:00:20.444 ********** 2026-04-05 00:24:49.793592 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG7SE7HREcuDWPHeyQO1RlhuwbgOT4PtTWPKc0dGq+2H) 2026-04-05 00:24:49.793629 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC98h1xC7CWBbQd3VBBF8j372MX0MAK11tAc8/JDYpMyUG2QwEfv6d2a13pYgUjoRboE7P3RYFIC1DPGBUAe56WfnefPC2I6uiKrsgos5xs9br3R5PBZeIHNQ0JusBm6vXSE2KyPW47Ms9aQMumLWTY/tJyDmUY3uDexjy6jdVvftkIWtOLThldwhIVqH6QaC2EZvLCHaYt8zkNILLmoI0d3r8DAy15jnFzQLOFbhTthT98C719lLNnSenufzt43Pvk69uODvqJvUcB3shcjrCyugWEHYhF36bdLTH5lMLXAHeCuQzBbtNCcYyOyLysivNmLqKrGKWXwzu1ZAiWTaDmyziJKQyNaWbQ6MMdoEM2PjZqhPRX31s8kyonzJg9Za95sIxfJx4x02eaAf04xUV+cU9IlBDR1MZJ/xdRpSiE3M3rlDy6CibUjuWp2gAnjFXo0/fGGvf9L3Bzno4UaNqGSg5ys8rZY1xbeED3G/raOm2ub8/828YItKpkVcoS41s=) 2026-04-05 00:24:49.793647 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJwFKMM4L5zlhRXMQBqVrYKGWgR5pephkbXlv+BH8sRQvU5l060zGaUnTLW9DItrcVhu6yjo5HQELjAsN8bDOZk=) 2026-04-05 
00:24:49.793661 | orchestrator | 2026-04-05 00:24:49.793676 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-05 00:24:49.793689 | orchestrator | Sunday 05 April 2026 00:24:46 +0000 (0:00:01.175) 0:00:21.620 ********** 2026-04-05 00:24:49.793703 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEAO7XjvIOakn4gK7xJBZElk5LXSNTbPGVqOhoEG5e8m) 2026-04-05 00:24:49.793719 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCVNXtnmT/3E7hb84wH4ZGkI2WrAcKdcsxtUg+zr/aA22iMH00uhQu6o9giVw7P2QEx8o5P6L9UqomrRNJWK5yPGwxPMlbDeZ4XO7Eec+EcFQxnWjVg5ebitaBR6QySzNMum0tYGzSQdjX6WtL+NDbjNort/U7tHb0DFoNyFzHxBVdQQEyZZcA5NuCkOFIORUYj1ouP1wYwbRsST5oB1R2o9DXedsbyv9AmyuHZVScSNUNyycbf9p+6HLJPIHpeZ58xtoJjTXFY9b8g9zToNQHCbX/3uyRvr+LQVD8RWS/DDUILq2B/RRyApKLqHAlt0sZUJqKeUMMGdinOHod53W+6gQvdSm7pno2lsyjm2ScxYGy8aaTThrGbQJ8ZmMIesKNgYYEtWBQlaopRoxuhQHaAQadyDUmnDJmsm1XM5L5XhlfQCUWYVabCnAcecOxKpY/mKsIgdLwJ74RzcrqR/OfZfFwkU9pI2uYQliNHzCzLU2AcpDmYgX0tx680MBZYxUE=) 2026-04-05 00:24:49.793743 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEsZrAOuDNv/0VQzefRTOLlNyVhMPHeKR3CJz/GQj9D8Ds9Ez5LY4WkXOlWi18xPuI5YmDP76woBH4nKmhpAQKY=) 2026-04-05 00:24:49.793757 | orchestrator | 2026-04-05 00:24:49.793772 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-05 00:24:49.793786 | orchestrator | Sunday 05 April 2026 00:24:47 +0000 (0:00:01.146) 0:00:22.767 ********** 2026-04-05 00:24:49.793802 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDOudnZidRTj5o9evPExjG/bIXtNUpt/GfvToT8dItUQAdeqN0JUk7tj/ruCN1IGvdfjhGtHUtqa/3ocaBHtQ+cLvnAT0k/6jZEwo9afjJhhOHtbRjfbG4MZXKfNCXHqRKf/1UiaYEFRLRGTQBOqslxHERx9ondciXHNCH4b1Dm+TbYdh9uWB51+jitg3etKoFYDkqCiidFupTGhxL1OkiR5YuHHBy3Za462KZurd/MTBdKQYJvI9GSdChbVxTKf9Qv8dw5H/ipNOnkgv7JYMzwmGek6+v599ncpmQDgsv91DUD8nBhVWuGBbiOqlHOcTnEhylNyKQ9Y25dD/oasq8lKaAUlA+Az6c9/lm7QIIrZ7RVz7efmQEgrZQkpazKmYHcdkjnwBnfidHdw2w1gzuxXR8DJeQdOSNjqlL5SghPLPZoLqge7gcwbk7h155Hr16UTpLWzw8c+QawbkWUBZFqiVUFoZ+SVcMQv7Z50tdeGtDZ8oExJu6KnHkPmKb136s=) 2026-04-05 00:24:49.793817 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM6sdQhhcAepuwCq17g5vjbLvWW8MEgf1R6UCFJmDlRDk1RNxXY9q7n6qVnerGyaZBVoEKxcyHSWRUbQZbWJGvA=) 2026-04-05 00:24:49.793831 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPpJ3izjQYJVoiIV4/D6DFLd/bpkdwpwBrBTbTn3J6jn) 2026-04-05 00:24:49.793845 | orchestrator | 2026-04-05 00:24:49.793855 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-05 00:24:49.793865 | orchestrator | Sunday 05 April 2026 00:24:48 +0000 (0:00:01.118) 0:00:23.885 ********** 2026-04-05 00:24:49.793880 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCHxpmuDr17J+Gu7oSvQDF1Vm4uen9A3kSAFr0nQ7G6fdUAFzxeuIfR88pg020wS7Mn5tfrk1FzvKlB/YYPN38Y=) 2026-04-05 00:24:49.793891 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDiS9L1SPwvzLN95HOgpAw9p1eU1d9aFTa/RCjGhJ4tnbo/nmCaYuXNPntFKuWbDmdDSuDx3rUAwJcqhrQsCWLDyL5lFYupg6X60EiNRT2MEvaWYERk2/B59qG3SWiQL44Y5OKYemLDg8G5aVXxRrxRcyBOQ3qgwuIONMhc7f6Adqr9Ri2M2BHybyC66QC+ekzbOuW14Kuhdk37BSC9XzkmfhrrqHECKOS30V6RDtDI/wkcMQ3dhPLgXELbq+sFMhxCpkKM+NBEZ8KnYGb8tWck02qWtNX+/yie3tB0dD+5OOXQ8Zm9qAhz/OVK5DYJlsJ7e1cSObK/paElJO8JAO4qRwnfc3HCetIn3stEFrYxAtbEQ8N60GLKovWq3EEzah12F9wC0e6iQTCVEYaC4qE0KWijN9KcPzPJvNkc4mwlps2/Cuc+oANNkoDUvL4jRpPG4h1p6H4kKWXOLUKdAvfdUAU9/0PuDSDVmqLcKGQp37o8Gjs83q84qsTi5ZDJce0=) 2026-04-05 00:24:49.793918 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMPyCsoEXvrdsjFCO1TKDShY8r+qqHv4Xq2JzWkMktl2) 2026-04-05 00:24:54.218876 | orchestrator | 2026-04-05 00:24:54.219031 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-05 00:24:54.219061 | orchestrator | Sunday 05 April 2026 00:24:49 +0000 (0:00:01.103) 0:00:24.989 ********** 2026-04-05 00:24:54.219085 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC5cfDFg4RvwggK0yWpV8qx3B+NuYOj/OoJmFZ+aZtM/7WvIb6BKdEB30sLeijThNTTC8I8QrOlCg9bAZ4M0/AKLcE33zHaDjU09819Sxl26NHtbIw9FHw5EGs5C5RhCaUg/OJeHQ2WrmVya/M9G6el2b1/uaBpvSJS/BM6v8BeLB8OKTCAcplDcvEWsKL9VthbHV7kj6zkN5PlXRjX0FeUc3tBak/kFuMP6Swwcwjp2AqixJyD6HvKJVOV13qfY/ccslb5wzg00lj+CKJZoJVyYe3Zbrsy7GpUeTsCMKq0RIekVqGCx56qfWGRSuZ4bLMTXAMhQIB9tp67OysHmBE1TesRvPjYwY5TKFeRackU4kvAvo3/QggVberZ16Tlpat2y0zmKudvbAze0z4QnAZMhPsEokeNjKyfWeByoMjhhqP7GvgpAmkco8QkGmF6/ZDIcajTk/5dUsgMmMQ3kAIuJ4OyrQfbIaLg4Mc8Aj4NYr/X6fYT0Dp5lJNq0i8WwQU=) 2026-04-05 00:24:54.219141 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFqPf7j5PYoG3SKW8Nw+PL3ab5DLI8isEHau6X1tcfyN3so5s8vXP6ZeW/CLDh3Pnk7HtmZSHY7Tcy9Th6eNsXU=) 2026-04-05 00:24:54.219164 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy5Lj/oP4L7ZoqCrYHZ+XaHmrtl3RfqRdwyN+ZBTIKD) 2026-04-05 00:24:54.219183 | orchestrator | 2026-04-05 00:24:54.219223 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-05 00:24:54.219239 | orchestrator | Sunday 05 April 2026 00:24:50 +0000 (0:00:01.126) 0:00:26.116 ********** 2026-04-05 00:24:54.219251 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCznvfQVVNj8f/t8/iGb4TX4l8sYhRxgIWYasdHMzFOXEK7iefAAoqwxm8gZ72/DLNAD6Gk+HT9wBePPsFH8/6ewbiPOWPCMyV2u1lckM5sEJidDUVbqbvSupgQnS94zULVtSWjWmqdKeDzompI59RbvAbVl0rczIFhWB06viGbPafaaPzT8gCGsupdazgkAWxIi6lCk8t+U4DV7fVqnaXhy7948qORgSk5jTHYs5i6zn0XXVHoXN+e/NrDFK9BfeEzoy3poHCN6C+OCDNjsjFhzHDNDAQJ3umJ60vg0hoQftTIdCc+wj9L1WMs3TgMs3JGqCukB2G2iO7D0ihO5zGw3DFtpvay9T4aBk8YnLUQIdqztq/q4lISGm8onX0EnMz8QM8fuHed+R3iMMdSAudR2haPZSTIOJRQBjoCTVr3apgkunoO0WgLQdKDJ4YOWhMi8FrTjGSHUt0kQ6i7DUc3JUFEtxw7ul4ToexS/qqI62LKSrDuOmpmrXDODjwpduc=) 2026-04-05 00:24:54.219263 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPtVC6XbPsJtUAaerB1Fv/CdudKzoXylnzjBp+US9QXjfjGYHsnybnWwpbwio0jJs1LQsJ3CKYHYr+rcvL5mdeU=) 2026-04-05 00:24:54.219274 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINHjJ7uX60o1FwD3UQUGRhfiCECjsceLMHCGYo5jGW7O) 2026-04-05 00:24:54.219285 | orchestrator | 2026-04-05 00:24:54.219296 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-05 00:24:54.219307 | orchestrator | Sunday 05 April 2026 00:24:52 +0000 (0:00:01.110) 0:00:27.226 ********** 2026-04-05 00:24:54.219317 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGts+vuGpVrt6B8KysyVBcY1G7qbL7N9HDwBne7aPqA7) 2026-04-05 00:24:54.219328 | orchestrator | changed: 
[testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDO2RO7MqDn0JMdfvUJfus4V5S4FGNlVuvj3yeQMZfvXThexNWDb/1S7B+Md7HY2mYFbuUqneSi84pr9rT1mBzQNJhw9mRgv1Xqgc2V/9izDvhF3gT5Wz0m3TKhRbWaxXRzeCYGMjVhwIKchTv0x6KhRi2MviHBKB+twZrSjcclPvojjdW8hE0zf6EyENC3+229Ho0VN5GUG+0ytflu2kIsFCL2uHAUG7SC1MQ5iJVvxLytijDTrq5gx6/E0rH1SsuHxfo5xjMMnp3tDXXBj1yOw5lag49uzUhMdDpsnFzT/4Yf/KIRDC8mpBHJi3nBXbVxOtMokk1hP1snhGBAv6LvLO9cRwtk2lMrDsbIMz7TxvIqUOqICmhMnJfIj5MRHVFek+jpQEbc2b/s/DY3jmsaB/5cV/1gGuZvHA3X1XVGbNlvznTwXUv6t5EDWd8RHjeCpXFHfeIQVUlqo8dVEvJ7jp4oCluNOmWVw76HkHObUbQ0k5pEdMZ/8/Gv6ZiSSRk=) 2026-04-05 00:24:54.219340 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLqc5/SVcq5LmY5bQsf5xYf9eZLeQHYPvFDEYkn4PIlT7GKhLpWsdnCqD6DDP2dQHLVH1yI5aURURdk6UqAfr3o=) 2026-04-05 00:24:54.219351 | orchestrator | 2026-04-05 00:24:54.219361 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-04-05 00:24:54.219372 | orchestrator | Sunday 05 April 2026 00:24:53 +0000 (0:00:01.097) 0:00:28.324 ********** 2026-04-05 00:24:54.219384 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-04-05 00:24:54.219395 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-04-05 00:24:54.219428 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-04-05 00:24:54.219450 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-04-05 00:24:54.219484 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-04-05 00:24:54.219497 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-04-05 00:24:54.219510 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-04-05 00:24:54.219533 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:24:54.219544 | orchestrator | 2026-04-05 00:24:54.219574 | orchestrator | TASK 
[osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-04-05 00:24:54.219586 | orchestrator | Sunday 05 April 2026 00:24:53 +0000 (0:00:00.186) 0:00:28.510 ********** 2026-04-05 00:24:54.219596 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:24:54.219607 | orchestrator | 2026-04-05 00:24:54.219618 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-04-05 00:24:54.219629 | orchestrator | Sunday 05 April 2026 00:24:53 +0000 (0:00:00.066) 0:00:28.577 ********** 2026-04-05 00:24:54.219640 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:24:54.219650 | orchestrator | 2026-04-05 00:24:54.219661 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-04-05 00:24:54.219672 | orchestrator | Sunday 05 April 2026 00:24:53 +0000 (0:00:00.063) 0:00:28.640 ********** 2026-04-05 00:24:54.219682 | orchestrator | changed: [testbed-manager] 2026-04-05 00:24:54.219694 | orchestrator | 2026-04-05 00:24:54.219704 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:24:54.219715 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-05 00:24:54.219728 | orchestrator | 2026-04-05 00:24:54.219738 | orchestrator | 2026-04-05 00:24:54.219749 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:24:54.219760 | orchestrator | Sunday 05 April 2026 00:24:53 +0000 (0:00:00.508) 0:00:29.149 ********** 2026-04-05 00:24:54.219770 | orchestrator | =============================================================================== 2026-04-05 00:24:54.219781 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.59s 2026-04-05 00:24:54.219792 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.42s 
2026-04-05 00:24:54.219804 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.29s 2026-04-05 00:24:54.219815 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2026-04-05 00:24:54.219825 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2026-04-05 00:24:54.219836 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2026-04-05 00:24:54.219855 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-04-05 00:24:54.219866 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-04-05 00:24:54.219877 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-04-05 00:24:54.219887 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-04-05 00:24:54.219898 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-04-05 00:24:54.219909 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-04-05 00:24:54.219919 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-04-05 00:24:54.219930 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-04-05 00:24:54.219941 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-04-05 00:24:54.219951 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-04-05 00:24:54.219962 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.51s 2026-04-05 00:24:54.219973 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host 
--- 0.20s 2026-04-05 00:24:54.219984 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.19s 2026-04-05 00:24:54.219995 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s 2026-04-05 00:24:54.415860 | orchestrator | + osism apply squid 2026-04-05 00:25:05.739886 | orchestrator | 2026-04-05 00:25:05 | INFO  | Prepare task for execution of squid. 2026-04-05 00:25:05.817290 | orchestrator | 2026-04-05 00:25:05 | INFO  | Task 057f0ed1-c837-4f50-8b01-3169cc831de0 (squid) was prepared for execution. 2026-04-05 00:25:05.817381 | orchestrator | 2026-04-05 00:25:05 | INFO  | It takes a moment until task 057f0ed1-c837-4f50-8b01-3169cc831de0 (squid) has been started and output is visible here. 2026-04-05 00:27:11.376695 | orchestrator | 2026-04-05 00:27:11.376806 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-04-05 00:27:11.376823 | orchestrator | 2026-04-05 00:27:11.376835 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-04-05 00:27:11.376847 | orchestrator | Sunday 05 April 2026 00:25:08 +0000 (0:00:00.181) 0:00:00.181 ********** 2026-04-05 00:27:11.376859 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-04-05 00:27:11.376871 | orchestrator | 2026-04-05 00:27:11.376882 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-04-05 00:27:11.376893 | orchestrator | Sunday 05 April 2026 00:25:08 +0000 (0:00:00.079) 0:00:00.260 ********** 2026-04-05 00:27:11.376904 | orchestrator | ok: [testbed-manager] 2026-04-05 00:27:11.376916 | orchestrator | 2026-04-05 00:27:11.376927 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-04-05 00:27:11.376938 | orchestrator 
| Sunday 05 April 2026 00:25:11 +0000 (0:00:02.121) 0:00:02.381 ********** 2026-04-05 00:27:11.376949 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-04-05 00:27:11.376960 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-04-05 00:27:11.376971 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-04-05 00:27:11.376982 | orchestrator | 2026-04-05 00:27:11.376993 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-04-05 00:27:11.377004 | orchestrator | Sunday 05 April 2026 00:25:12 +0000 (0:00:01.253) 0:00:03.635 ********** 2026-04-05 00:27:11.377014 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-04-05 00:27:11.377026 | orchestrator | 2026-04-05 00:27:11.377036 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-04-05 00:27:11.377047 | orchestrator | Sunday 05 April 2026 00:25:13 +0000 (0:00:01.123) 0:00:04.759 ********** 2026-04-05 00:27:11.377058 | orchestrator | ok: [testbed-manager] 2026-04-05 00:27:11.377073 | orchestrator | 2026-04-05 00:27:11.377092 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-04-05 00:27:11.377120 | orchestrator | Sunday 05 April 2026 00:25:13 +0000 (0:00:00.354) 0:00:05.113 ********** 2026-04-05 00:27:11.377139 | orchestrator | changed: [testbed-manager] 2026-04-05 00:27:11.377157 | orchestrator | 2026-04-05 00:27:11.377174 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-04-05 00:27:11.377192 | orchestrator | Sunday 05 April 2026 00:25:14 +0000 (0:00:01.007) 0:00:06.120 ********** 2026-04-05 00:27:11.377209 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-04-05 00:27:11.377228 | orchestrator | ok: [testbed-manager] 2026-04-05 00:27:11.377244 | orchestrator | 2026-04-05 00:27:11.377263 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-04-05 00:27:11.377281 | orchestrator | Sunday 05 April 2026 00:25:58 +0000 (0:00:43.368) 0:00:49.489 ********** 2026-04-05 00:27:11.377300 | orchestrator | changed: [testbed-manager] 2026-04-05 00:27:11.377320 | orchestrator | 2026-04-05 00:27:11.377340 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-04-05 00:27:11.377388 | orchestrator | Sunday 05 April 2026 00:26:10 +0000 (0:00:12.158) 0:01:01.647 ********** 2026-04-05 00:27:11.377403 | orchestrator | Pausing for 60 seconds 2026-04-05 00:27:11.377414 | orchestrator | changed: [testbed-manager] 2026-04-05 00:27:11.377425 | orchestrator | 2026-04-05 00:27:11.377436 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-04-05 00:27:11.377480 | orchestrator | Sunday 05 April 2026 00:27:10 +0000 (0:01:00.085) 0:02:01.733 ********** 2026-04-05 00:27:11.377492 | orchestrator | ok: [testbed-manager] 2026-04-05 00:27:11.377503 | orchestrator | 2026-04-05 00:27:11.377514 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-04-05 00:27:11.377525 | orchestrator | Sunday 05 April 2026 00:27:10 +0000 (0:00:00.071) 0:02:01.804 ********** 2026-04-05 00:27:11.377536 | orchestrator | changed: [testbed-manager] 2026-04-05 00:27:11.377546 | orchestrator | 2026-04-05 00:27:11.377557 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:27:11.377568 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:27:11.377579 | orchestrator | 2026-04-05 00:27:11.377590 | orchestrator | 2026-04-05 00:27:11.377601 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-04-05 00:27:11.377611 | orchestrator | Sunday 05 April 2026 00:27:11 +0000 (0:00:00.690) 0:02:02.494 ********** 2026-04-05 00:27:11.377622 | orchestrator | =============================================================================== 2026-04-05 00:27:11.377633 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2026-04-05 00:27:11.377643 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 43.37s 2026-04-05 00:27:11.377654 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.16s 2026-04-05 00:27:11.377665 | orchestrator | osism.services.squid : Install required packages ------------------------ 2.12s 2026-04-05 00:27:11.377675 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.25s 2026-04-05 00:27:11.377686 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.12s 2026-04-05 00:27:11.377696 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 1.01s 2026-04-05 00:27:11.377707 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.69s 2026-04-05 00:27:11.377718 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.35s 2026-04-05 00:27:11.377728 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s 2026-04-05 00:27:11.377739 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2026-04-05 00:27:11.586813 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-05 00:27:11.586881 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-04-05 00:27:11.594417 | orchestrator | + set -e 2026-04-05 00:27:11.594477 | orchestrator | + NAMESPACE=kolla 2026-04-05 
00:27:11.594484 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-04-05 00:27:11.601683 | orchestrator | ++ semver latest 9.0.0 2026-04-05 00:27:11.673573 | orchestrator | + [[ -1 -lt 0 ]] 2026-04-05 00:27:11.673691 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-05 00:27:11.674217 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-04-05 00:27:23.111444 | orchestrator | 2026-04-05 00:27:23 | INFO  | Prepare task for execution of operator. 2026-04-05 00:27:23.196544 | orchestrator | 2026-04-05 00:27:23 | INFO  | Task a93dfe76-9b4d-42dd-8227-e1c11621c26a (operator) was prepared for execution. 2026-04-05 00:27:23.196663 | orchestrator | 2026-04-05 00:27:23 | INFO  | It takes a moment until task a93dfe76-9b4d-42dd-8227-e1c11621c26a (operator) has been started and output is visible here. 2026-04-05 00:27:39.160891 | orchestrator | 2026-04-05 00:27:39.160996 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-04-05 00:27:39.161011 | orchestrator | 2026-04-05 00:27:39.161023 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-05 00:27:39.161035 | orchestrator | Sunday 05 April 2026 00:27:26 +0000 (0:00:00.212) 0:00:00.212 ********** 2026-04-05 00:27:39.161045 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:27:39.161056 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:27:39.161066 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:27:39.161100 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:27:39.161111 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:27:39.161120 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:27:39.161130 | orchestrator | 2026-04-05 00:27:39.161140 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-04-05 00:27:39.161149 | orchestrator | Sunday 05 April 2026 00:27:30 
+0000 (0:00:04.227) 0:00:04.440 ********** 2026-04-05 00:27:39.161159 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:27:39.161168 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:27:39.161178 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:27:39.161188 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:27:39.161197 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:27:39.161206 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:27:39.161219 | orchestrator | 2026-04-05 00:27:39.161236 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-04-05 00:27:39.161247 | orchestrator | 2026-04-05 00:27:39.161275 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-04-05 00:27:39.161286 | orchestrator | Sunday 05 April 2026 00:27:31 +0000 (0:00:00.814) 0:00:05.254 ********** 2026-04-05 00:27:39.161296 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:27:39.161305 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:27:39.161314 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:27:39.161324 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:27:39.161379 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:27:39.161389 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:27:39.161398 | orchestrator | 2026-04-05 00:27:39.161408 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-05 00:27:39.161422 | orchestrator | Sunday 05 April 2026 00:27:31 +0000 (0:00:00.165) 0:00:05.419 ********** 2026-04-05 00:27:39.161434 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:27:39.161446 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:27:39.161458 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:27:39.161469 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:27:39.161481 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:27:39.161491 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:27:39.161502 | orchestrator | 
2026-04-05 00:27:39.161514 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-04-05 00:27:39.161524 | orchestrator | Sunday 05 April 2026 00:27:31 +0000 (0:00:00.195) 0:00:05.614 ********** 2026-04-05 00:27:39.161535 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:27:39.161547 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:27:39.161558 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:27:39.161569 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:27:39.161579 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:27:39.161590 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:27:39.161601 | orchestrator | 2026-04-05 00:27:39.161612 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-04-05 00:27:39.161624 | orchestrator | Sunday 05 April 2026 00:27:32 +0000 (0:00:00.654) 0:00:06.269 ********** 2026-04-05 00:27:39.161635 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:27:39.161645 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:27:39.161655 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:27:39.161664 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:27:39.161674 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:27:39.161683 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:27:39.161693 | orchestrator | 2026-04-05 00:27:39.161702 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-04-05 00:27:39.161711 | orchestrator | Sunday 05 April 2026 00:27:33 +0000 (0:00:00.853) 0:00:07.123 ********** 2026-04-05 00:27:39.161721 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-04-05 00:27:39.161734 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-04-05 00:27:39.161749 | orchestrator | changed: [testbed-node-3] => (item=adm) 2026-04-05 00:27:39.161759 | orchestrator | changed: [testbed-node-2] => (item=adm) 
2026-04-05 00:27:39.161769 | orchestrator | changed: [testbed-node-4] => (item=adm) 2026-04-05 00:27:39.161786 | orchestrator | changed: [testbed-node-5] => (item=adm) 2026-04-05 00:27:39.161796 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-04-05 00:27:39.161806 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-04-05 00:27:39.161815 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-04-05 00:27:39.161824 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-04-05 00:27:39.161834 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-04-05 00:27:39.161843 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-04-05 00:27:39.161853 | orchestrator | 2026-04-05 00:27:39.161863 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-04-05 00:27:39.161872 | orchestrator | Sunday 05 April 2026 00:27:34 +0000 (0:00:01.063) 0:00:08.186 ********** 2026-04-05 00:27:39.161882 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:27:39.161891 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:27:39.161901 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:27:39.161910 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:27:39.161919 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:27:39.161929 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:27:39.161938 | orchestrator | 2026-04-05 00:27:39.161948 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-04-05 00:27:39.161958 | orchestrator | Sunday 05 April 2026 00:27:35 +0000 (0:00:01.332) 0:00:09.519 ********** 2026-04-05 00:27:39.161968 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-04-05 00:27:39.161978 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-04-05 00:27:39.161987 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 
2026-04-05 00:27:39.161997 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-04-05 00:27:39.162007 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-04-05 00:27:39.162095 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-04-05 00:27:39.162109 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-04-05 00:27:39.162118 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-04-05 00:27:39.162128 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-04-05 00:27:39.162137 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-04-05 00:27:39.162147 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-04-05 00:27:39.162156 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-04-05 00:27:39.162166 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-04-05 00:27:39.162175 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-04-05 00:27:39.162185 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-04-05 00:27:39.162195 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2026-04-05 00:27:39.162204 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-04-05 00:27:39.162214 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-04-05 00:27:39.162223 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-04-05 00:27:39.162233 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-04-05 00:27:39.162242 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-04-05 00:27:39.162258 | orchestrator | 2026-04-05 00:27:39.162270 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-04-05 00:27:39.162281 | orchestrator | Sunday 05 April 2026 00:27:37 +0000 (0:00:01.199) 0:00:10.718 ********** 2026-04-05 00:27:39.162291 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:27:39.162300 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:27:39.162310 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:27:39.162343 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:27:39.162354 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:27:39.162363 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:27:39.162373 | orchestrator | 2026-04-05 00:27:39.162382 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-04-05 00:27:39.162392 | orchestrator | Sunday 05 April 2026 00:27:37 +0000 (0:00:00.163) 0:00:10.882 ********** 2026-04-05 00:27:39.162401 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:27:39.162411 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:27:39.162422 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:27:39.162438 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:27:39.162448 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:27:39.162458 | orchestrator | skipping: [testbed-node-5] 2026-04-05 
00:27:39.162467 | orchestrator | 2026-04-05 00:27:39.162478 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-04-05 00:27:39.162494 | orchestrator | Sunday 05 April 2026 00:27:37 +0000 (0:00:00.190) 0:00:11.072 ********** 2026-04-05 00:27:39.162504 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:27:39.162513 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:27:39.162523 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:27:39.162532 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:27:39.162542 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:27:39.162551 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:27:39.162560 | orchestrator | 2026-04-05 00:27:39.162570 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-04-05 00:27:39.162580 | orchestrator | Sunday 05 April 2026 00:27:37 +0000 (0:00:00.599) 0:00:11.672 ********** 2026-04-05 00:27:39.162589 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:27:39.162599 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:27:39.162608 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:27:39.162618 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:27:39.162632 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:27:39.162646 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:27:39.162661 | orchestrator | 2026-04-05 00:27:39.162678 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-04-05 00:27:39.162694 | orchestrator | Sunday 05 April 2026 00:27:38 +0000 (0:00:00.173) 0:00:11.846 ********** 2026-04-05 00:27:39.162708 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-05 00:27:39.162724 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:27:39.162740 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-05 00:27:39.162756 | orchestrator | changed: 
[testbed-node-3] 2026-04-05 00:27:39.162766 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-05 00:27:39.162775 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:27:39.162784 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-05 00:27:39.162794 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-04-05 00:27:39.162803 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:27:39.162813 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:27:39.162822 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-04-05 00:27:39.162831 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:27:39.162841 | orchestrator | 2026-04-05 00:27:39.162850 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-04-05 00:27:39.162860 | orchestrator | Sunday 05 April 2026 00:27:38 +0000 (0:00:00.709) 0:00:12.555 ********** 2026-04-05 00:27:39.162869 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:27:39.162878 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:27:39.162888 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:27:39.162897 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:27:39.162907 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:27:39.162916 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:27:39.162925 | orchestrator | 2026-04-05 00:27:39.162935 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-04-05 00:27:39.162944 | orchestrator | Sunday 05 April 2026 00:27:39 +0000 (0:00:00.174) 0:00:12.730 ********** 2026-04-05 00:27:39.162962 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:27:39.162972 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:27:39.162982 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:27:39.162991 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:27:39.163008 | orchestrator | skipping: [testbed-node-4] 
2026-04-05 00:27:40.419408 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:27:40.419565 | orchestrator | 2026-04-05 00:27:40.419592 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-04-05 00:27:40.419616 | orchestrator | Sunday 05 April 2026 00:27:39 +0000 (0:00:00.166) 0:00:12.896 ********** 2026-04-05 00:27:40.419635 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:27:40.419656 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:27:40.419674 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:27:40.419693 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:27:40.419713 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:27:40.419733 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:27:40.419753 | orchestrator | 2026-04-05 00:27:40.419773 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-04-05 00:27:40.419793 | orchestrator | Sunday 05 April 2026 00:27:39 +0000 (0:00:00.154) 0:00:13.051 ********** 2026-04-05 00:27:40.419812 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:27:40.419832 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:27:40.419852 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:27:40.419872 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:27:40.419891 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:27:40.419914 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:27:40.419939 | orchestrator | 2026-04-05 00:27:40.419963 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-04-05 00:27:40.420015 | orchestrator | Sunday 05 April 2026 00:27:39 +0000 (0:00:00.646) 0:00:13.698 ********** 2026-04-05 00:27:40.420040 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:27:40.420059 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:27:40.420079 | orchestrator | skipping: [testbed-node-2] 2026-04-05 
00:27:40.420098 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:27:40.420118 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:27:40.420137 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:27:40.420157 | orchestrator | 2026-04-05 00:27:40.420177 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:27:40.420206 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-05 00:27:40.420227 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-05 00:27:40.420247 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-05 00:27:40.420268 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-05 00:27:40.420288 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-05 00:27:40.420307 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-05 00:27:40.420351 | orchestrator | 2026-04-05 00:27:40.420373 | orchestrator | 2026-04-05 00:27:40.420391 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:27:40.420411 | orchestrator | Sunday 05 April 2026 00:27:40 +0000 (0:00:00.238) 0:00:13.936 ********** 2026-04-05 00:27:40.420430 | orchestrator | =============================================================================== 2026-04-05 00:27:40.420481 | orchestrator | Gathering Facts --------------------------------------------------------- 4.23s 2026-04-05 00:27:40.420500 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.33s 2026-04-05 00:27:40.420521 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 
1.20s 2026-04-05 00:27:40.420542 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.06s 2026-04-05 00:27:40.420561 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.85s 2026-04-05 00:27:40.420581 | orchestrator | Do not require tty for all users ---------------------------------------- 0.81s 2026-04-05 00:27:40.420600 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.71s 2026-04-05 00:27:40.420620 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.65s 2026-04-05 00:27:40.420641 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.65s 2026-04-05 00:27:40.420660 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.60s 2026-04-05 00:27:40.420680 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.24s 2026-04-05 00:27:40.420700 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.20s 2026-04-05 00:27:40.420720 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.19s 2026-04-05 00:27:40.420739 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.17s 2026-04-05 00:27:40.420758 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.17s 2026-04-05 00:27:40.420778 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.17s 2026-04-05 00:27:40.420798 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.17s 2026-04-05 00:27:40.420818 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.16s 2026-04-05 00:27:40.420837 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts 
-------------- 0.15s 2026-04-05 00:27:40.622090 | orchestrator | + osism apply --environment custom facts 2026-04-05 00:27:41.929489 | orchestrator | 2026-04-05 00:27:41 | INFO  | Trying to run play facts in environment custom 2026-04-05 00:27:51.997171 | orchestrator | 2026-04-05 00:27:51 | INFO  | Prepare task for execution of facts. 2026-04-05 00:27:52.076798 | orchestrator | 2026-04-05 00:27:52 | INFO  | Task c654ac47-6d11-4c5c-a0eb-a1018c1d7857 (facts) was prepared for execution. 2026-04-05 00:27:52.076893 | orchestrator | 2026-04-05 00:27:52 | INFO  | It takes a moment until task c654ac47-6d11-4c5c-a0eb-a1018c1d7857 (facts) has been started and output is visible here. 2026-04-05 00:28:34.161161 | orchestrator | 2026-04-05 00:28:34.161310 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-04-05 00:28:34.161330 | orchestrator | 2026-04-05 00:28:34.161343 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-04-05 00:28:34.161354 | orchestrator | Sunday 05 April 2026 00:27:55 +0000 (0:00:00.122) 0:00:00.122 ********** 2026-04-05 00:28:34.161365 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:28:34.161378 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:28:34.161389 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:28:34.161400 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:28:34.161411 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:28:34.161422 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:28:34.161433 | orchestrator | ok: [testbed-manager] 2026-04-05 00:28:34.161444 | orchestrator | 2026-04-05 00:28:34.161456 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-04-05 00:28:34.161467 | orchestrator | Sunday 05 April 2026 00:27:56 +0000 (0:00:01.491) 0:00:01.614 ********** 2026-04-05 00:28:34.161477 | orchestrator | ok: [testbed-manager] 2026-04-05 
00:28:34.161489 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:28:34.161500 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:28:34.161540 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:28:34.161567 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:28:34.161578 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:28:34.161589 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:28:34.161600 | orchestrator | 2026-04-05 00:28:34.161611 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-04-05 00:28:34.161622 | orchestrator | 2026-04-05 00:28:34.161633 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-05 00:28:34.161644 | orchestrator | Sunday 05 April 2026 00:27:58 +0000 (0:00:01.256) 0:00:02.870 ********** 2026-04-05 00:28:34.161655 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:28:34.161666 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:28:34.161679 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:28:34.161691 | orchestrator | 2026-04-05 00:28:34.161704 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-05 00:28:34.161717 | orchestrator | Sunday 05 April 2026 00:27:58 +0000 (0:00:00.101) 0:00:02.972 ********** 2026-04-05 00:28:34.161730 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:28:34.161743 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:28:34.161755 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:28:34.161767 | orchestrator | 2026-04-05 00:28:34.161780 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-05 00:28:34.161793 | orchestrator | Sunday 05 April 2026 00:27:58 +0000 (0:00:00.205) 0:00:03.178 ********** 2026-04-05 00:28:34.161805 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:28:34.161818 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:28:34.161830 | orchestrator 
| ok: [testbed-node-5] 2026-04-05 00:28:34.161842 | orchestrator | 2026-04-05 00:28:34.161854 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-05 00:28:34.161867 | orchestrator | Sunday 05 April 2026 00:27:58 +0000 (0:00:00.218) 0:00:03.396 ********** 2026-04-05 00:28:34.161881 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:28:34.161910 | orchestrator | 2026-04-05 00:28:34.161923 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-05 00:28:34.161936 | orchestrator | Sunday 05 April 2026 00:27:58 +0000 (0:00:00.151) 0:00:03.548 ********** 2026-04-05 00:28:34.161949 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:28:34.161962 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:28:34.161975 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:28:34.161987 | orchestrator | 2026-04-05 00:28:34.162000 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-05 00:28:34.162011 | orchestrator | Sunday 05 April 2026 00:27:59 +0000 (0:00:00.395) 0:00:03.943 ********** 2026-04-05 00:28:34.162076 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:28:34.162087 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:28:34.162098 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:28:34.162109 | orchestrator | 2026-04-05 00:28:34.162120 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-05 00:28:34.162131 | orchestrator | Sunday 05 April 2026 00:27:59 +0000 (0:00:00.119) 0:00:04.063 ********** 2026-04-05 00:28:34.162142 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:28:34.162153 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:28:34.162164 | orchestrator | changed: [testbed-node-5] 
2026-04-05 00:28:34.162174 | orchestrator | 2026-04-05 00:28:34.162185 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-05 00:28:34.162196 | orchestrator | Sunday 05 April 2026 00:28:00 +0000 (0:00:00.972) 0:00:05.036 ********** 2026-04-05 00:28:34.162207 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:28:34.162218 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:28:34.162229 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:28:34.162247 | orchestrator | 2026-04-05 00:28:34.162351 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-05 00:28:34.162388 | orchestrator | Sunday 05 April 2026 00:28:00 +0000 (0:00:00.435) 0:00:05.471 ********** 2026-04-05 00:28:34.162406 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:28:34.162426 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:28:34.162446 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:28:34.162464 | orchestrator | 2026-04-05 00:28:34.162484 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-05 00:28:34.162495 | orchestrator | Sunday 05 April 2026 00:28:01 +0000 (0:00:01.015) 0:00:06.488 ********** 2026-04-05 00:28:34.162506 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:28:34.162517 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:28:34.162528 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:28:34.162538 | orchestrator | 2026-04-05 00:28:34.162549 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-04-05 00:28:34.162560 | orchestrator | Sunday 05 April 2026 00:28:17 +0000 (0:00:15.578) 0:00:22.066 ********** 2026-04-05 00:28:34.162571 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:28:34.162582 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:28:34.162607 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:28:34.162618 
| orchestrator | 2026-04-05 00:28:34.162629 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2026-04-05 00:28:34.162662 | orchestrator | Sunday 05 April 2026 00:28:17 +0000 (0:00:00.083) 0:00:22.149 ********** 2026-04-05 00:28:34.162674 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:28:34.162684 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:28:34.162696 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:28:34.162706 | orchestrator | 2026-04-05 00:28:34.162717 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-04-05 00:28:34.162728 | orchestrator | Sunday 05 April 2026 00:28:24 +0000 (0:00:07.426) 0:00:29.576 ********** 2026-04-05 00:28:34.162739 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:28:34.162750 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:28:34.162760 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:28:34.162771 | orchestrator | 2026-04-05 00:28:34.162782 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-04-05 00:28:34.162793 | orchestrator | Sunday 05 April 2026 00:28:25 +0000 (0:00:00.417) 0:00:29.994 ********** 2026-04-05 00:28:34.162804 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2026-04-05 00:28:34.162827 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2026-04-05 00:28:34.162838 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2026-04-05 00:28:34.162848 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2026-04-05 00:28:34.162859 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2026-04-05 00:28:34.162870 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2026-04-05 00:28:34.162881 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2026-04-05 00:28:34.162892 | 
orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2026-04-05 00:28:34.162903 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2026-04-05 00:28:34.162913 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2026-04-05 00:28:34.162923 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2026-04-05 00:28:34.162972 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2026-04-05 00:28:34.162983 | orchestrator | 2026-04-05 00:28:34.162993 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-04-05 00:28:34.163002 | orchestrator | Sunday 05 April 2026 00:28:28 +0000 (0:00:03.504) 0:00:33.498 ********** 2026-04-05 00:28:34.163012 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:28:34.163021 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:28:34.163031 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:28:34.163040 | orchestrator | 2026-04-05 00:28:34.163050 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-05 00:28:34.163067 | orchestrator | 2026-04-05 00:28:34.163077 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-05 00:28:34.163087 | orchestrator | Sunday 05 April 2026 00:28:29 +0000 (0:00:01.260) 0:00:34.759 ********** 2026-04-05 00:28:34.163097 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:28:34.163106 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:28:34.163116 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:28:34.163125 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:28:34.163135 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:28:34.163144 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:28:34.163154 | orchestrator | ok: [testbed-manager] 2026-04-05 00:28:34.163163 | orchestrator | 2026-04-05 00:28:34.163172 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-05 00:28:34.163183 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:28:34.163193 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:28:34.163205 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:28:34.163214 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:28:34.163224 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:28:34.163244 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:28:34.163254 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:28:34.163296 | orchestrator | 2026-04-05 00:28:34.163306 | orchestrator | 2026-04-05 00:28:34.163316 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:28:34.163326 | orchestrator | Sunday 05 April 2026 00:28:34 +0000 (0:00:04.198) 0:00:38.958 ********** 2026-04-05 00:28:34.163336 | orchestrator | =============================================================================== 2026-04-05 00:28:34.163345 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.58s 2026-04-05 00:28:34.163355 | orchestrator | Install required packages (Debian) -------------------------------------- 7.43s 2026-04-05 00:28:34.163365 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.20s 2026-04-05 00:28:34.163374 | orchestrator | Copy fact files --------------------------------------------------------- 3.50s 2026-04-05 00:28:34.163384 | orchestrator | Create 
custom facts directory ------------------------------------------- 1.49s 2026-04-05 00:28:34.163393 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.26s 2026-04-05 00:28:34.163411 | orchestrator | Copy fact file ---------------------------------------------------------- 1.26s 2026-04-05 00:28:34.396588 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.02s 2026-04-05 00:28:34.396662 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 0.97s 2026-04-05 00:28:34.396668 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.44s 2026-04-05 00:28:34.396673 | orchestrator | Create custom facts directory ------------------------------------------- 0.42s 2026-04-05 00:28:34.396677 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.40s 2026-04-05 00:28:34.396682 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.22s 2026-04-05 00:28:34.396687 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.21s 2026-04-05 00:28:34.396691 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s 2026-04-05 00:28:34.396716 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.12s 2026-04-05 00:28:34.396733 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s 2026-04-05 00:28:34.396737 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.08s 2026-04-05 00:28:34.640123 | orchestrator | + osism apply bootstrap 2026-04-05 00:28:46.142534 | orchestrator | 2026-04-05 00:28:46 | INFO  | Prepare task for execution of bootstrap. 
2026-04-05 00:28:46.218490 | orchestrator | 2026-04-05 00:28:46 | INFO  | Task e5eef752-698e-4978-86e3-ef06f2da8f22 (bootstrap) was prepared for execution. 2026-04-05 00:28:46.218612 | orchestrator | 2026-04-05 00:28:46 | INFO  | It takes a moment until task e5eef752-698e-4978-86e3-ef06f2da8f22 (bootstrap) has been started and output is visible here. 2026-04-05 00:29:02.274444 | orchestrator | 2026-04-05 00:29:02.274544 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2026-04-05 00:29:02.274561 | orchestrator | 2026-04-05 00:29:02.274573 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2026-04-05 00:29:02.274582 | orchestrator | Sunday 05 April 2026 00:28:50 +0000 (0:00:00.217) 0:00:00.217 ********** 2026-04-05 00:29:02.274591 | orchestrator | ok: [testbed-manager] 2026-04-05 00:29:02.274601 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:29:02.274610 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:29:02.274618 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:29:02.274627 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:29:02.274636 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:29:02.274644 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:29:02.274653 | orchestrator | 2026-04-05 00:29:02.274662 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-05 00:29:02.274671 | orchestrator | 2026-04-05 00:29:02.274680 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-05 00:29:02.274689 | orchestrator | Sunday 05 April 2026 00:28:50 +0000 (0:00:00.388) 0:00:00.606 ********** 2026-04-05 00:29:02.274698 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:29:02.274707 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:29:02.274715 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:29:02.274724 | orchestrator | ok: [testbed-manager] 2026-04-05 
00:29:02.274733 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:29:02.274742 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:29:02.274750 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:29:02.274759 | orchestrator |
2026-04-05 00:29:02.274768 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-04-05 00:29:02.274777 | orchestrator |
2026-04-05 00:29:02.274785 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-05 00:29:02.274794 | orchestrator | Sunday 05 April 2026 00:28:54 +0000 (0:00:04.549) 0:00:05.156 **********
2026-04-05 00:29:02.274804 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-04-05 00:29:02.274813 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-05 00:29:02.274822 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-04-05 00:29:02.274831 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-04-05 00:29:02.274839 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-04-05 00:29:02.274848 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-05 00:29:02.274857 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-04-05 00:29:02.274866 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-05 00:29:02.274874 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-04-05 00:29:02.274883 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-05 00:29:02.274892 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-04-05 00:29:02.274901 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-04-05 00:29:02.274927 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-05 00:29:02.274936 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-04-05 00:29:02.274945 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-05 00:29:02.274954 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-05 00:29:02.274962 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-05 00:29:02.274971 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-05 00:29:02.274979 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:29:02.274988 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-05 00:29:02.274996 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:29:02.275005 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-04-05 00:29:02.275013 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-05 00:29:02.275022 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-05 00:29:02.275030 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-05 00:29:02.275039 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-05 00:29:02.275047 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-05 00:29:02.275056 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-05 00:29:02.275064 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-05 00:29:02.275072 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-05 00:29:02.275081 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-04-05 00:29:02.275090 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-05 00:29:02.275098 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-05 00:29:02.275106 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-05 00:29:02.275115 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-05 00:29:02.275123 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:29:02.275132 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-05 00:29:02.275140 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-05 00:29:02.275149 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-05 00:29:02.275158 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-04-05 00:29:02.275167 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:29:02.275175 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-05 00:29:02.275184 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:29:02.275192 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-05 00:29:02.275201 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-05 00:29:02.275209 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-05 00:29:02.275218 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-05 00:29:02.275263 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-05 00:29:02.275274 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-05 00:29:02.275282 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-05 00:29:02.275291 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-05 00:29:02.275300 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-05 00:29:02.275308 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:29:02.275317 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-05 00:29:02.275326 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-05 00:29:02.275334 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:29:02.275343 | orchestrator |
2026-04-05 00:29:02.275352 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-04-05 00:29:02.275360 | orchestrator |
2026-04-05 00:29:02.275369 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-04-05 00:29:02.275385 | orchestrator | Sunday 05 April 2026 00:28:55 +0000 (0:00:00.501) 0:00:05.658 **********
2026-04-05 00:29:02.275393 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:29:02.275402 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:29:02.275411 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:29:02.275419 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:29:02.275428 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:29:02.275436 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:29:02.275445 | orchestrator | ok: [testbed-manager]
2026-04-05 00:29:02.275454 | orchestrator |
2026-04-05 00:29:02.275463 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-04-05 00:29:02.275471 | orchestrator | Sunday 05 April 2026 00:28:56 +0000 (0:00:01.181) 0:00:06.839 **********
2026-04-05 00:29:02.275480 | orchestrator | ok: [testbed-manager]
2026-04-05 00:29:02.275489 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:29:02.275498 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:29:02.275506 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:29:02.275515 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:29:02.275523 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:29:02.275532 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:29:02.275541 | orchestrator |
2026-04-05 00:29:02.275550 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-04-05 00:29:02.275558 | orchestrator | Sunday 05 April 2026 00:28:57 +0000 (0:00:01.262) 0:00:08.102 **********
2026-04-05 00:29:02.275568 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:29:02.275579 | orchestrator |
2026-04-05 00:29:02.275588 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-04-05 00:29:02.275597 | orchestrator | Sunday 05 April 2026 00:28:58 +0000 (0:00:00.332) 0:00:08.434 **********
2026-04-05 00:29:02.275605 | orchestrator | changed: [testbed-manager]
2026-04-05 00:29:02.275614 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:29:02.275623 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:29:02.275631 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:29:02.275640 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:29:02.275648 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:29:02.275657 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:29:02.275665 | orchestrator |
2026-04-05 00:29:02.275683 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-04-05 00:29:02.275692 | orchestrator | Sunday 05 April 2026 00:28:59 +0000 (0:00:01.435) 0:00:09.870 **********
2026-04-05 00:29:02.275701 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:29:02.275711 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:29:02.275720 | orchestrator |
2026-04-05 00:29:02.275729 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-04-05 00:29:02.275738 | orchestrator | Sunday 05 April 2026 00:29:00 +0000 (0:00:00.315) 0:00:10.185 **********
2026-04-05 00:29:02.275746 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:29:02.275755 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:29:02.275764 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:29:02.275772 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:29:02.275781 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:29:02.275789 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:29:02.275798 | orchestrator |
2026-04-05 00:29:02.275807 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-04-05 00:29:02.275815 | orchestrator | Sunday 05 April 2026 00:29:01 +0000 (0:00:01.009) 0:00:11.196 **********
2026-04-05 00:29:02.275824 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:29:02.275833 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:29:02.275846 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:29:02.275855 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:29:02.275863 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:29:02.275877 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:29:02.275892 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:29:02.275905 | orchestrator |
2026-04-05 00:29:02.275914 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-04-05 00:29:02.275926 | orchestrator | Sunday 05 April 2026 00:29:01 +0000 (0:00:00.682) 0:00:11.878 **********
2026-04-05 00:29:02.275935 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:29:02.275943 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:29:02.275952 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:29:02.275960 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:29:02.275969 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:29:02.275977 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:29:02.275986 | orchestrator | ok: [testbed-manager]
2026-04-05 00:29:02.275994 | orchestrator |
2026-04-05 00:29:02.276003 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-04-05 00:29:02.276013 | orchestrator | Sunday 05 April 2026 00:29:02 +0000 (0:00:00.243) 0:00:12.330 **********
2026-04-05 00:29:02.276021 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:29:02.276030 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:29:02.276044 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:29:14.911404 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:29:14.911548 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:29:14.911576 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:29:14.911596 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:29:14.911616 | orchestrator |
2026-04-05 00:29:14.911636 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-04-05 00:29:14.911657 | orchestrator | Sunday 05 April 2026 00:29:02 +0000 (0:00:00.243) 0:00:12.573 **********
2026-04-05 00:29:14.911679 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:29:14.911717 | orchestrator |
2026-04-05 00:29:14.911738 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-04-05 00:29:14.911760 | orchestrator | Sunday 05 April 2026 00:29:02 +0000 (0:00:00.360) 0:00:12.934 **********
2026-04-05 00:29:14.911785 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:29:14.911808 | orchestrator |
2026-04-05 00:29:14.911830 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-04-05 00:29:14.911850 | orchestrator | Sunday 05 April 2026 00:29:03 +0000 (0:00:00.325) 0:00:13.259 **********
2026-04-05 00:29:14.911871 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:29:14.911895 | orchestrator | ok: [testbed-manager]
2026-04-05 00:29:14.911917 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:29:14.911939 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:29:14.911962 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:29:14.911984 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:29:14.912006 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:29:14.912030 | orchestrator |
2026-04-05 00:29:14.912052 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-04-05 00:29:14.912075 | orchestrator | Sunday 05 April 2026 00:29:04 +0000 (0:00:01.278) 0:00:14.537 **********
2026-04-05 00:29:14.912094 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:29:14.912115 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:29:14.912135 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:29:14.912155 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:29:14.912175 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:29:14.912316 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:29:14.912342 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:29:14.912362 | orchestrator |
2026-04-05 00:29:14.912382 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-04-05 00:29:14.912400 | orchestrator | Sunday 05 April 2026 00:29:04 +0000 (0:00:00.262) 0:00:14.800 **********
2026-04-05 00:29:14.912418 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:29:14.912436 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:29:14.912455 | orchestrator | ok: [testbed-manager]
2026-04-05 00:29:14.912475 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:29:14.912495 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:29:14.912515 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:29:14.912535 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:29:14.912555 | orchestrator |
2026-04-05 00:29:14.912576 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-04-05 00:29:14.912596 | orchestrator | Sunday 05 April 2026 00:29:05 +0000 (0:00:00.624) 0:00:15.424 **********
2026-04-05 00:29:14.912613 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:29:14.912630 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:29:14.912646 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:29:14.912662 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:29:14.912678 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:29:14.912694 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:29:14.912710 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:29:14.912727 | orchestrator |
2026-04-05 00:29:14.912743 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-04-05 00:29:14.912759 | orchestrator | Sunday 05 April 2026 00:29:05 +0000 (0:00:00.307) 0:00:15.732 **********
2026-04-05 00:29:14.912775 | orchestrator | ok: [testbed-manager]
2026-04-05 00:29:14.912790 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:29:14.912807 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:29:14.912825 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:29:14.912841 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:29:14.912857 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:29:14.912873 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:29:14.912891 | orchestrator |
2026-04-05 00:29:14.912909 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-04-05 00:29:14.912928 | orchestrator | Sunday 05 April 2026 00:29:06 +0000 (0:00:00.691) 0:00:16.424 **********
2026-04-05 00:29:14.912946 | orchestrator | ok: [testbed-manager]
2026-04-05 00:29:14.912965 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:29:14.912980 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:29:14.912996 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:29:14.913013 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:29:14.913029 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:29:14.913044 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:29:14.913062 | orchestrator |
2026-04-05 00:29:14.913094 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-04-05 00:29:14.913113 | orchestrator | Sunday 05 April 2026 00:29:07 +0000 (0:00:01.093) 0:00:17.517 **********
2026-04-05 00:29:14.913132 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:29:14.913147 | orchestrator | ok: [testbed-manager]
2026-04-05 00:29:14.913165 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:29:14.913183 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:29:14.913200 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:29:14.913245 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:29:14.913265 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:29:14.913283 | orchestrator |
2026-04-05 00:29:14.913299 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-04-05 00:29:14.913318 | orchestrator | Sunday 05 April 2026 00:29:08 +0000 (0:00:01.147) 0:00:18.665 **********
2026-04-05 00:29:14.913366 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:29:14.913402 | orchestrator |
2026-04-05 00:29:14.913481 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-04-05 00:29:14.913501 | orchestrator | Sunday 05 April 2026 00:29:08 +0000 (0:00:00.344) 0:00:19.009 **********
2026-04-05 00:29:14.913518 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:29:14.913535 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:29:14.913551 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:29:14.913567 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:29:14.913583 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:29:14.913600 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:29:14.913617 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:29:14.913633 | orchestrator |
2026-04-05 00:29:14.913651 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-04-05 00:29:14.913668 | orchestrator | Sunday 05 April 2026 00:29:10 +0000 (0:00:01.414) 0:00:20.424 **********
2026-04-05 00:29:14.913684 | orchestrator | ok: [testbed-manager]
2026-04-05 00:29:14.913701 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:29:14.913718 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:29:14.913735 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:29:14.913752 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:29:14.913770 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:29:14.913788 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:29:14.913806 | orchestrator |
2026-04-05 00:29:14.913822 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-04-05 00:29:14.913839 | orchestrator | Sunday 05 April 2026 00:29:10 +0000 (0:00:00.257) 0:00:20.681 **********
2026-04-05 00:29:14.913855 | orchestrator | ok: [testbed-manager]
2026-04-05 00:29:14.913870 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:29:14.913886 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:29:14.913901 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:29:14.913917 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:29:14.913933 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:29:14.913947 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:29:14.913963 | orchestrator |
2026-04-05 00:29:14.913980 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-04-05 00:29:14.913995 | orchestrator | Sunday 05 April 2026 00:29:10 +0000 (0:00:00.243) 0:00:20.925 **********
2026-04-05 00:29:14.914010 | orchestrator | ok: [testbed-manager]
2026-04-05 00:29:14.914107 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:29:14.914126 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:29:14.914143 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:29:14.914160 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:29:14.914178 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:29:14.914324 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:29:14.914346 | orchestrator |
2026-04-05 00:29:14.914364 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-04-05 00:29:14.914380 | orchestrator | Sunday 05 April 2026 00:29:11 +0000 (0:00:00.266) 0:00:21.191 **********
2026-04-05 00:29:14.914398 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:29:14.914416 | orchestrator |
2026-04-05 00:29:14.914432 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-04-05 00:29:14.914449 | orchestrator | Sunday 05 April 2026 00:29:11 +0000 (0:00:00.332) 0:00:21.523 **********
2026-04-05 00:29:14.914525 | orchestrator | ok: [testbed-manager]
2026-04-05 00:29:14.914548 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:29:14.914567 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:29:14.914583 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:29:14.914601 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:29:14.914619 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:29:14.914636 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:29:14.914652 | orchestrator |
2026-04-05 00:29:14.914686 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-04-05 00:29:14.914703 | orchestrator | Sunday 05 April 2026 00:29:11 +0000 (0:00:00.585) 0:00:22.109 **********
2026-04-05 00:29:14.914719 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:29:14.914736 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:29:14.914752 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:29:14.914768 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:29:14.914783 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:29:14.914799 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:29:14.914816 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:29:14.914831 | orchestrator |
2026-04-05 00:29:14.914847 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-04-05 00:29:14.914863 | orchestrator | Sunday 05 April 2026 00:29:12 +0000 (0:00:00.266) 0:00:22.375 **********
2026-04-05 00:29:14.914878 | orchestrator | ok: [testbed-manager]
2026-04-05 00:29:14.914893 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:29:14.914909 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:29:14.914926 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:29:14.914944 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:29:14.914961 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:29:14.914979 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:29:14.914997 | orchestrator |
2026-04-05 00:29:14.915015 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-04-05 00:29:14.915033 | orchestrator | Sunday 05 April 2026 00:29:13 +0000 (0:00:01.103) 0:00:23.479 **********
2026-04-05 00:29:14.915050 | orchestrator | ok: [testbed-manager]
2026-04-05 00:29:14.915066 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:29:14.915082 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:29:14.915099 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:29:14.915115 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:29:14.915133 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:29:14.915150 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:29:14.915167 | orchestrator |
2026-04-05 00:29:14.915185 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-04-05 00:29:14.915202 | orchestrator | Sunday 05 April 2026 00:29:13 +0000 (0:00:00.559) 0:00:24.038 **********
2026-04-05 00:29:14.915250 | orchestrator | ok: [testbed-manager]
2026-04-05 00:29:14.915333 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:29:14.915351 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:29:14.915369 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:29:14.915410 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:29:58.591819 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:29:58.591921 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:29:58.591933 | orchestrator |
2026-04-05 00:29:58.591943 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-04-05 00:29:58.591954 | orchestrator | Sunday 05 April 2026 00:29:14 +0000 (0:00:01.130) 0:00:25.168 **********
2026-04-05 00:29:58.591963 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:29:58.591973 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:29:58.591981 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:29:58.591990 | orchestrator | changed: [testbed-manager]
2026-04-05 00:29:58.591999 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:29:58.592008 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:29:58.592017 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:29:58.592026 | orchestrator |
2026-04-05 00:29:58.592036 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-04-05 00:29:58.592045 | orchestrator | Sunday 05 April 2026 00:29:31 +0000 (0:00:16.900) 0:00:42.068 **********
2026-04-05 00:29:58.592053 | orchestrator | ok: [testbed-manager]
2026-04-05 00:29:58.592062 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:29:58.592070 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:29:58.592079 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:29:58.592087 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:29:58.592096 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:29:58.592104 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:29:58.592137 | orchestrator |
2026-04-05 00:29:58.592147 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-04-05 00:29:58.592156 | orchestrator | Sunday 05 April 2026 00:29:32 +0000 (0:00:00.260) 0:00:42.329 **********
2026-04-05 00:29:58.592165 | orchestrator | ok: [testbed-manager]
2026-04-05 00:29:58.592174 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:29:58.592182 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:29:58.592191 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:29:58.592200 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:29:58.592209 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:29:58.592217 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:29:58.592226 | orchestrator |
2026-04-05 00:29:58.592235 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-04-05 00:29:58.592243 | orchestrator | Sunday 05 April 2026 00:29:32 +0000 (0:00:00.228) 0:00:42.558 **********
2026-04-05 00:29:58.592252 | orchestrator | ok: [testbed-manager]
2026-04-05 00:29:58.592261 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:29:58.592269 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:29:58.592278 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:29:58.592304 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:29:58.592348 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:29:58.592359 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:29:58.592369 | orchestrator |
2026-04-05 00:29:58.592380 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-04-05 00:29:58.592390 | orchestrator | Sunday 05 April 2026 00:29:32 +0000 (0:00:00.227) 0:00:42.785 **********
2026-04-05 00:29:58.592402 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:29:58.592465 | orchestrator |
2026-04-05 00:29:58.592476 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-04-05 00:29:58.592486 | orchestrator | Sunday 05 April 2026 00:29:32 +0000 (0:00:00.320) 0:00:43.106 **********
2026-04-05 00:29:58.592496 | orchestrator | ok: [testbed-manager]
2026-04-05 00:29:58.592506 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:29:58.592516 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:29:58.592526 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:29:58.592536 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:29:58.592546 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:29:58.592556 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:29:58.592566 | orchestrator |
2026-04-05 00:29:58.592575 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-04-05 00:29:58.592584 | orchestrator | Sunday 05 April 2026 00:29:34 +0000 (0:00:01.829) 0:00:44.936 **********
2026-04-05 00:29:58.592592 | orchestrator | changed: [testbed-manager]
2026-04-05 00:29:58.592615 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:29:58.592624 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:29:58.592632 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:29:58.592641 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:29:58.592660 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:29:58.592669 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:29:58.592677 | orchestrator |
2026-04-05 00:29:58.592686 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-04-05 00:29:58.592695 | orchestrator | Sunday 05 April 2026 00:29:35 +0000 (0:00:01.214) 0:00:46.151 **********
2026-04-05 00:29:58.592703 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:29:58.592712 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:29:58.592720 | orchestrator | ok: [testbed-manager]
2026-04-05 00:29:58.592729 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:29:58.592737 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:29:58.592746 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:29:58.592755 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:29:58.592763 | orchestrator |
2026-04-05 00:29:58.592772 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-04-05 00:29:58.592790 | orchestrator | Sunday 05 April 2026 00:29:36 +0000 (0:00:00.880) 0:00:47.031 **********
2026-04-05 00:29:58.592805 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:29:58.592815 | orchestrator |
2026-04-05 00:29:58.592824 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-04-05 00:29:58.592834 | orchestrator | Sunday 05 April 2026 00:29:37 +0000 (0:00:00.329) 0:00:47.361 **********
2026-04-05 00:29:58.592842 | orchestrator | changed: [testbed-manager]
2026-04-05 00:29:58.592851 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:29:58.592860 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:29:58.592868 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:29:58.592877 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:29:58.592886 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:29:58.592894 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:29:58.592903 | orchestrator |
2026-04-05 00:29:58.592929 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-04-05 00:29:58.592938 | orchestrator | Sunday 05 April 2026 00:29:38 +0000 (0:00:01.028) 0:00:48.389 **********
2026-04-05 00:29:58.592947 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:29:58.592956 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:29:58.592964 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:29:58.592973 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:29:58.592984 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:29:58.592998 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:29:58.593010 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:29:58.593019 | orchestrator |
2026-04-05 00:29:58.593028 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-04-05 00:29:58.593037 | orchestrator | Sunday 05 April 2026 00:29:38 +0000 (0:00:00.229) 0:00:48.619 **********
2026-04-05 00:29:58.593046 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:29:58.593055 | orchestrator |
2026-04-05 00:29:58.593064 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-04-05 00:29:58.593072 | orchestrator | Sunday 05 April 2026 00:29:38 +0000 (0:00:00.349) 0:00:48.968 **********
2026-04-05 00:29:58.593081 | orchestrator | ok: [testbed-manager]
2026-04-05 00:29:58.593090 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:29:58.593099 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:29:58.593107 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:29:58.593116 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:29:58.593125 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:29:58.593133 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:29:58.593142 | orchestrator |
2026-04-05 00:29:58.593157 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-04-05 00:29:58.593167 | orchestrator | Sunday 05 April 2026 00:29:40 +0000 (0:00:01.713) 0:00:50.682 **********
2026-04-05 00:29:58.593176 | orchestrator | changed: [testbed-manager]
2026-04-05 00:29:58.593185 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:29:58.593193 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:29:58.593202 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:29:58.593211 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:29:58.593219 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:29:58.593228 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:29:58.593236 | orchestrator |
2026-04-05 00:29:58.593247 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-04-05 00:29:58.593262 | orchestrator | Sunday 05 April 2026 00:29:41 +0000 (0:00:01.228) 0:00:51.910 **********
2026-04-05 00:29:58.593276 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:29:58.593295 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:29:58.593368 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:29:58.593383 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:29:58.593397 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:29:58.593411 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:29:58.593425 | orchestrator | changed: [testbed-manager]
2026-04-05 00:29:58.593439 | orchestrator |
2026-04-05 00:29:58.593454 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-04-05 00:29:58.593469 | orchestrator | Sunday 05 April 2026 00:29:55 +0000 (0:00:13.494) 0:01:05.405 **********
2026-04-05 00:29:58.593484 | orchestrator | ok: [testbed-manager]
2026-04-05 00:29:58.593499 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:29:58.593514 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:29:58.593523 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:29:58.593532 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:29:58.593540 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:29:58.593549 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:29:58.593557 | orchestrator |
2026-04-05 00:29:58.593566 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-04-05 00:29:58.593574 | orchestrator | Sunday 05 April 2026 00:29:56 +0000 (0:00:01.458) 0:01:06.864 **********
2026-04-05 00:29:58.593583 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:29:58.593591 | orchestrator | ok: [testbed-manager]
2026-04-05 00:29:58.593600 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:29:58.593608 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:29:58.593617 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:29:58.593625 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:29:58.593634 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:29:58.593642 | orchestrator |
2026-04-05 00:29:58.593651 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-04-05 00:29:58.593660 | orchestrator | Sunday 05 April 2026 00:29:57 +0000 (0:00:01.015) 0:01:07.879 **********
2026-04-05 00:29:58.593668 | orchestrator | ok: [testbed-manager]
2026-04-05 00:29:58.593677 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:29:58.593685 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:29:58.593694 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:29:58.593702 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:29:58.593711 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:29:58.593719 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:29:58.593727 | orchestrator |
2026-04-05 00:29:58.593736 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-04-05 00:29:58.593745 | orchestrator | Sunday 05 April 2026 00:29:57 +0000 (0:00:00.254) 0:01:08.134 **********
2026-04-05 00:29:58.593754 | orchestrator | ok: [testbed-manager]
2026-04-05 00:29:58.593769 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:29:58.593778 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:29:58.593786 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:29:58.593795 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:29:58.593803 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:29:58.593814 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:29:58.593828 | orchestrator |
2026-04-05 00:29:58.593840 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-04-05 00:29:58.593852 | orchestrator | Sunday 05 April 2026 00:29:58 +0000 (0:00:00.283) 0:01:08.417 **********
2026-04-05 00:29:58.593868 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:29:58.593896 | orchestrator |
2026-04-05 00:29:58.593926 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-04-05 00:32:16.811044 | orchestrator | Sunday 05 April 2026 00:29:58 +0000 (0:00:00.345) 0:01:08.763 **********
2026-04-05 00:32:16.811160 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:32:16.811177 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:32:16.811190 | orchestrator |
ok: [testbed-manager] 2026-04-05 00:32:16.811201 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:32:16.811237 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:32:16.811249 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:32:16.811260 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:32:16.811271 | orchestrator | 2026-04-05 00:32:16.811282 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2026-04-05 00:32:16.811293 | orchestrator | Sunday 05 April 2026 00:30:00 +0000 (0:00:02.044) 0:01:10.807 ********** 2026-04-05 00:32:16.811304 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:32:16.811316 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:32:16.811327 | orchestrator | changed: [testbed-manager] 2026-04-05 00:32:16.811337 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:32:16.811348 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:32:16.811358 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:32:16.811369 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:32:16.811379 | orchestrator | 2026-04-05 00:32:16.811390 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-04-05 00:32:16.811402 | orchestrator | Sunday 05 April 2026 00:30:01 +0000 (0:00:00.609) 0:01:11.417 ********** 2026-04-05 00:32:16.811412 | orchestrator | ok: [testbed-manager] 2026-04-05 00:32:16.811423 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:32:16.811434 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:32:16.811444 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:32:16.811455 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:32:16.811466 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:32:16.811476 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:32:16.811487 | orchestrator | 2026-04-05 00:32:16.811497 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-04-05 
00:32:16.811508 | orchestrator | Sunday 05 April 2026 00:30:01 +0000 (0:00:00.253) 0:01:11.671 ********** 2026-04-05 00:32:16.811519 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:32:16.811529 | orchestrator | ok: [testbed-manager] 2026-04-05 00:32:16.811540 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:32:16.811550 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:32:16.811561 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:32:16.811573 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:32:16.811584 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:32:16.811597 | orchestrator | 2026-04-05 00:32:16.811610 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-04-05 00:32:16.811623 | orchestrator | Sunday 05 April 2026 00:30:02 +0000 (0:00:01.354) 0:01:13.025 ********** 2026-04-05 00:32:16.811635 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:32:16.811647 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:32:16.811660 | orchestrator | changed: [testbed-manager] 2026-04-05 00:32:16.811673 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:32:16.811685 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:32:16.811698 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:32:16.811744 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:32:16.811762 | orchestrator | 2026-04-05 00:32:16.811781 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-04-05 00:32:16.811799 | orchestrator | Sunday 05 April 2026 00:30:04 +0000 (0:00:01.932) 0:01:14.957 ********** 2026-04-05 00:32:16.811819 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:32:16.811831 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:32:16.811842 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:32:16.811852 | orchestrator | ok: [testbed-manager] 2026-04-05 00:32:16.811863 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:32:16.811873 | orchestrator | ok: 
[testbed-node-5] 2026-04-05 00:32:16.811884 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:32:16.811894 | orchestrator | 2026-04-05 00:32:16.811905 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-04-05 00:32:16.811916 | orchestrator | Sunday 05 April 2026 00:30:07 +0000 (0:00:02.630) 0:01:17.587 ********** 2026-04-05 00:32:16.811926 | orchestrator | ok: [testbed-manager] 2026-04-05 00:32:16.811937 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:32:16.811947 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:32:16.811967 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:32:16.811978 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:32:16.811988 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:32:16.811999 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:32:16.812009 | orchestrator | 2026-04-05 00:32:16.812020 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-04-05 00:32:16.812031 | orchestrator | Sunday 05 April 2026 00:30:39 +0000 (0:00:32.076) 0:01:49.664 ********** 2026-04-05 00:32:16.812042 | orchestrator | changed: [testbed-manager] 2026-04-05 00:32:16.812052 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:32:16.812063 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:32:16.812073 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:32:16.812084 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:32:16.812094 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:32:16.812105 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:32:16.812115 | orchestrator | 2026-04-05 00:32:16.812126 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-04-05 00:32:16.812137 | orchestrator | Sunday 05 April 2026 00:32:00 +0000 (0:01:21.058) 0:03:10.722 ********** 2026-04-05 00:32:16.812147 | orchestrator | ok: [testbed-manager] 2026-04-05 00:32:16.812158 | orchestrator | 
ok: [testbed-node-0] 2026-04-05 00:32:16.812169 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:32:16.812179 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:32:16.812190 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:32:16.812201 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:32:16.812211 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:32:16.812222 | orchestrator | 2026-04-05 00:32:16.812233 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2026-04-05 00:32:16.812243 | orchestrator | Sunday 05 April 2026 00:32:02 +0000 (0:00:01.896) 0:03:12.618 ********** 2026-04-05 00:32:16.812254 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:32:16.812265 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:32:16.812275 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:32:16.812285 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:32:16.812296 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:32:16.812306 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:32:16.812317 | orchestrator | changed: [testbed-manager] 2026-04-05 00:32:16.812327 | orchestrator | 2026-04-05 00:32:16.812338 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-04-05 00:32:16.812349 | orchestrator | Sunday 05 April 2026 00:32:15 +0000 (0:00:13.179) 0:03:25.798 ********** 2026-04-05 00:32:16.812395 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-04-05 00:32:16.812420 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, 
testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-04-05 00:32:16.812436 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-04-05 00:32:16.812461 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-04-05 00:32:16.812474 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-04-05 00:32:16.812495 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 
'value': 1024}]}) 2026-04-05 00:32:16.812507 | orchestrator | 2026-04-05 00:32:16.812518 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-04-05 00:32:16.812529 | orchestrator | Sunday 05 April 2026 00:32:16 +0000 (0:00:00.417) 0:03:26.216 ********** 2026-04-05 00:32:16.812540 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-05 00:32:16.812551 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:32:16.812562 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-05 00:32:16.812573 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:32:16.812584 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-05 00:32:16.812599 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:32:16.812619 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-05 00:32:16.812635 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:32:16.812650 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-05 00:32:16.812667 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-05 00:32:16.812682 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-05 00:32:16.812699 | orchestrator | 2026-04-05 00:32:16.812784 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-04-05 00:32:16.812802 | orchestrator | Sunday 05 April 2026 00:32:16 +0000 (0:00:00.704) 0:03:26.920 ********** 2026-04-05 00:32:16.812819 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-05 00:32:16.812838 | orchestrator | skipping: [testbed-manager] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-05 00:32:16.812856 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-05 00:32:16.812874 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-05 00:32:16.812893 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-05 00:32:16.812926 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-05 00:32:24.434230 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-05 00:32:24.434343 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-05 00:32:24.434358 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-05 00:32:24.434370 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-05 00:32:24.434383 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:32:24.434395 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-05 00:32:24.434426 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-05 00:32:24.434438 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-05 00:32:24.434449 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-05 00:32:24.434459 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-05 00:32:24.434470 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-05 
00:32:24.434481 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-05 00:32:24.434492 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-05 00:32:24.434502 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-05 00:32:24.434513 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-05 00:32:24.434524 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-05 00:32:24.434535 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-05 00:32:24.434545 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-05 00:32:24.434556 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-05 00:32:24.434567 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-05 00:32:24.434578 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-05 00:32:24.434588 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-05 00:32:24.434599 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-05 00:32:24.434609 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-05 00:32:24.434620 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-05 00:32:24.434631 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:32:24.434642 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:32:24.434652 
| orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-05 00:32:24.434663 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-05 00:32:24.434674 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-05 00:32:24.434684 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-05 00:32:24.434695 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-05 00:32:24.434706 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-05 00:32:24.434716 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-05 00:32:24.434761 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-05 00:32:24.434791 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-05 00:32:24.434804 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-05 00:32:24.434818 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:32:24.434830 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-05 00:32:24.434852 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-05 00:32:24.434865 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-05 00:32:24.434877 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-05 00:32:24.434890 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-05 00:32:24.434921 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-05 00:32:24.434935 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-04-05 00:32:24.434947 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-04-05 00:32:24.434960 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-04-05 00:32:24.434985 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-04-05 00:32:24.434998 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-04-05 00:32:24.435011 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-04-05 00:32:24.435023 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-04-05 00:32:24.435035 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-04-05 00:32:24.435047 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-04-05 00:32:24.435060 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-04-05 00:32:24.435073 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-04-05 00:32:24.435085 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-04-05 00:32:24.435097 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-04-05 00:32:24.435109 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 
2026-04-05 00:32:24.435123 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-04-05 00:32:24.435134 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-04-05 00:32:24.435145 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-04-05 00:32:24.435156 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-04-05 00:32:24.435167 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-04-05 00:32:24.435177 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-04-05 00:32:24.435188 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-04-05 00:32:24.435200 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-04-05 00:32:24.435220 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-04-05 00:32:24.435240 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-04-05 00:32:24.435259 | orchestrator | 2026-04-05 00:32:24.435279 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2026-04-05 00:32:24.435300 | orchestrator | Sunday 05 April 2026 00:32:23 +0000 (0:00:06.517) 0:03:33.438 ********** 2026-04-05 00:32:24.435320 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-05 00:32:24.435347 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-05 00:32:24.435359 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-05 00:32:24.435369 | orchestrator | changed: 
[testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-05 00:32:24.435380 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-05 00:32:24.435391 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-05 00:32:24.435401 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-05 00:32:24.435412 | orchestrator | 2026-04-05 00:32:24.435427 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2026-04-05 00:32:24.435447 | orchestrator | Sunday 05 April 2026 00:32:23 +0000 (0:00:00.599) 0:03:34.038 ********** 2026-04-05 00:32:24.435473 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-05 00:32:24.435492 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-05 00:32:24.435510 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:32:24.435527 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:32:24.435547 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-05 00:32:24.435565 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:32:24.435583 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-05 00:32:24.435595 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:32:24.435606 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-05 00:32:24.435617 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-05 00:32:24.435638 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-05 
00:32:37.618999 | orchestrator | 2026-04-05 00:32:37.619118 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] ***************** 2026-04-05 00:32:37.619134 | orchestrator | Sunday 05 April 2026 00:32:24 +0000 (0:00:00.608) 0:03:34.647 ********** 2026-04-05 00:32:37.619146 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-05 00:32:37.619158 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:32:37.619171 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-05 00:32:37.619182 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-05 00:32:37.619193 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:32:37.619204 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:32:37.619215 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-05 00:32:37.619226 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:32:37.619267 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-05 00:32:37.619279 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-05 00:32:37.619290 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-05 00:32:37.619301 | orchestrator | 2026-04-05 00:32:37.619312 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2026-04-05 00:32:37.619324 | orchestrator | Sunday 05 April 2026 00:32:24 +0000 (0:00:00.530) 0:03:35.177 ********** 2026-04-05 00:32:37.619334 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-04-05 
00:32:37.619345 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-04-05 00:32:37.619384 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:32:37.619395 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-04-05 00:32:37.619406 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:32:37.619417 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:32:37.619428 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-04-05 00:32:37.619438 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:32:37.619449 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-04-05 00:32:37.619460 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-04-05 00:32:37.619471 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-04-05 00:32:37.619482 | orchestrator | 2026-04-05 00:32:37.619492 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-04-05 00:32:37.619504 | orchestrator | Sunday 05 April 2026 00:32:25 +0000 (0:00:00.719) 0:03:35.897 ********** 2026-04-05 00:32:37.619516 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:32:37.619527 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:32:37.619540 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:32:37.619553 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:32:37.619565 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:32:37.619577 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:32:37.619589 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:32:37.619601 | orchestrator | 2026-04-05 00:32:37.619614 | orchestrator | TASK 
[osism.commons.services : Populate service facts] *************************
2026-04-05 00:32:37.619628 | orchestrator | Sunday 05 April 2026 00:32:26 +0000 (0:00:00.322) 0:03:36.220 **********
2026-04-05 00:32:37.619640 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:32:37.619652 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:32:37.619665 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:32:37.619677 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:32:37.619689 | orchestrator | ok: [testbed-manager]
2026-04-05 00:32:37.619701 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:32:37.619713 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:32:37.619725 | orchestrator |
2026-04-05 00:32:37.619737 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-04-05 00:32:37.619750 | orchestrator | Sunday 05 April 2026 00:32:32 +0000 (0:00:06.001) 0:03:42.221 **********
2026-04-05 00:32:37.619785 | orchestrator | skipping: [testbed-manager] => (item=nscd) 
2026-04-05 00:32:37.619798 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:32:37.619810 | orchestrator | skipping: [testbed-node-0] => (item=nscd) 
2026-04-05 00:32:37.619824 | orchestrator | skipping: [testbed-node-1] => (item=nscd) 
2026-04-05 00:32:37.619836 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:32:37.619849 | orchestrator | skipping: [testbed-node-2] => (item=nscd) 
2026-04-05 00:32:37.619861 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:32:37.619874 | orchestrator | skipping: [testbed-node-3] => (item=nscd) 
2026-04-05 00:32:37.619887 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:32:37.619898 | orchestrator | skipping: [testbed-node-4] => (item=nscd) 
2026-04-05 00:32:37.619909 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:32:37.619919 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:32:37.619930 | orchestrator | skipping: [testbed-node-5] => (item=nscd) 
2026-04-05 00:32:37.619941 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:32:37.619952 | orchestrator |
2026-04-05 00:32:37.619963 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-04-05 00:32:37.619974 | orchestrator | Sunday 05 April 2026 00:32:32 +0000 (0:00:00.339) 0:03:42.560 **********
2026-04-05 00:32:37.619985 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-04-05 00:32:37.619996 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-04-05 00:32:37.620015 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-04-05 00:32:37.620043 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-04-05 00:32:37.620055 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-04-05 00:32:37.620066 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-04-05 00:32:37.620077 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-04-05 00:32:37.620088 | orchestrator |
2026-04-05 00:32:37.620099 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-04-05 00:32:37.620110 | orchestrator | Sunday 05 April 2026 00:32:33 +0000 (0:00:01.053) 0:03:43.613 **********
2026-04-05 00:32:37.620122 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:32:37.620136 | orchestrator |
2026-04-05 00:32:37.620164 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-04-05 00:32:37.620176 | orchestrator | Sunday 05 April 2026 00:32:33 +0000 (0:00:00.460) 0:03:44.074 **********
2026-04-05 00:32:37.620187 | orchestrator | ok: [testbed-manager]
2026-04-05 00:32:37.620199 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:32:37.620210 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:32:37.620221 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:32:37.620231 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:32:37.620242 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:32:37.620253 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:32:37.620264 | orchestrator |
2026-04-05 00:32:37.620275 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-04-05 00:32:37.620286 | orchestrator | Sunday 05 April 2026 00:32:35 +0000 (0:00:01.343) 0:03:45.417 **********
2026-04-05 00:32:37.620297 | orchestrator | ok: [testbed-manager]
2026-04-05 00:32:37.620307 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:32:37.620318 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:32:37.620329 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:32:37.620339 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:32:37.620350 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:32:37.620361 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:32:37.620371 | orchestrator |
2026-04-05 00:32:37.620382 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-04-05 00:32:37.620393 | orchestrator | Sunday 05 April 2026 00:32:35 +0000 (0:00:00.620) 0:03:46.038 **********
2026-04-05 00:32:37.620404 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:32:37.620415 | orchestrator | changed: [testbed-manager]
2026-04-05 00:32:37.620426 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:32:37.620449 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:32:37.620460 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:32:37.620482 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:32:37.620493 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:32:37.620504 | orchestrator |
2026-04-05 00:32:37.620515 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-04-05 00:32:37.620526 | orchestrator | Sunday 05 April 2026 00:32:36 +0000 (0:00:00.649) 0:03:46.687 **********
2026-04-05 00:32:37.620537 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:32:37.620548 | orchestrator | ok: [testbed-manager]
2026-04-05 00:32:37.620560 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:32:37.620571 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:32:37.620582 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:32:37.620593 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:32:37.620604 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:32:37.620614 | orchestrator |
2026-04-05 00:32:37.620625 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-04-05 00:32:37.620637 | orchestrator | Sunday 05 April 2026 00:32:37 +0000 (0:00:00.586) 0:03:47.274 **********
2026-04-05 00:32:37.620652 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775347509.8428438, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:32:37.620679 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775347562.2248418, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:32:37.620692 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775347530.0870671, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:32:37.620714 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775347539.5395947, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:32:43.284034 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775347552.420015, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:32:43.284173 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775347535.9821274, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:32:43.284213 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775347537.621927, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:32:43.284227 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:32:43.284267 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:32:43.284294 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:32:43.284306 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:32:43.284346 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:32:43.284359 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:32:43.284371 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:32:43.284383 | orchestrator |
2026-04-05 00:32:43.284396 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-04-05 00:32:43.284417 | orchestrator | Sunday 05 April 2026 00:32:38 +0000 (0:00:00.973) 0:03:48.247 **********
2026-04-05 00:32:43.284428 | orchestrator | changed: [testbed-manager]
2026-04-05 00:32:43.284440 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:32:43.284451 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:32:43.284462 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:32:43.284473 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:32:43.284483 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:32:43.284494 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:32:43.284505 | orchestrator |
2026-04-05 00:32:43.284516 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-04-05 00:32:43.284527 | orchestrator | Sunday 05 April 2026 00:32:39 +0000 (0:00:01.131) 0:03:49.379 **********
2026-04-05 00:32:43.284538 | orchestrator | changed: [testbed-manager]
2026-04-05 00:32:43.284548 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:32:43.284561 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:32:43.284573 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:32:43.284585 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:32:43.284597 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:32:43.284609 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:32:43.284622 | orchestrator |
2026-04-05 00:32:43.284634 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-04-05 00:32:43.284647 | orchestrator | Sunday 05 April 2026 00:32:40 +0000 (0:00:01.194) 0:03:50.573 **********
2026-04-05 00:32:43.284659 | orchestrator | changed: [testbed-manager]
2026-04-05 00:32:43.284671 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:32:43.284683 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:32:43.284695 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:32:43.284708 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:32:43.284720 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:32:43.284732 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:32:43.284744 | orchestrator |
2026-04-05 00:32:43.284764 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-04-05 00:32:43.284824 | orchestrator | Sunday 05 April 2026 00:32:41 +0000 (0:00:01.445) 0:03:52.019 **********
2026-04-05 00:32:43.284842 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:32:43.284859 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:32:43.284876 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:32:43.284894 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:32:43.284913 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:32:43.284932 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:32:43.284950 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:32:43.284968 | orchestrator |
2026-04-05 00:32:43.284986 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-04-05 00:32:43.285005 | orchestrator | Sunday 05 April 2026 00:32:42 +0000 (0:00:00.252) 0:03:52.272 **********
2026-04-05 00:32:43.285025 | orchestrator | ok: [testbed-manager]
2026-04-05 00:32:43.285046 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:32:43.285063 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:32:43.285080 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:32:43.285091 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:32:43.285102 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:32:43.285113 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:32:43.285123 | orchestrator |
2026-04-05 00:32:43.285134 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-04-05 00:32:43.285145 | orchestrator | Sunday 05 April 2026 00:32:42 +0000 (0:00:00.720) 0:03:52.992 **********
2026-04-05 00:32:43.285159 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:32:43.285172 | orchestrator |
2026-04-05 00:32:43.285183 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-04-05 00:32:43.285205 | orchestrator | Sunday 05 April 2026 00:32:43 +0000 (0:00:00.466) 0:03:53.459 **********
2026-04-05 00:34:04.994556 | orchestrator | ok: [testbed-manager]
2026-04-05 00:34:04.994665 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:34:04.994682 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:34:04.994694 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:34:04.994706 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:34:04.994719 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:34:04.994731 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:34:04.994743 | orchestrator |
2026-04-05 00:34:04.994756 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-04-05 00:34:04.994769 | orchestrator | Sunday 05 April 2026 00:32:52 +0000 (0:00:08.934) 0:04:02.394 **********
2026-04-05 00:34:04.994781 | orchestrator | ok: [testbed-manager]
2026-04-05 00:34:04.994793 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:34:04.994805 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:34:04.994816 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:34:04.994828 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:34:04.994840 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:34:04.994852 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:34:04.994864 | orchestrator |
2026-04-05 00:34:04.994876 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-04-05 00:34:04.994888 | orchestrator | Sunday 05 April 2026 00:32:53 +0000 (0:00:01.415) 0:04:03.810 **********
2026-04-05 00:34:04.994900 | orchestrator | ok: [testbed-manager]
2026-04-05 00:34:04.994911 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:34:04.994921 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:34:04.994932 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:34:04.994944 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:34:04.995014 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:34:04.995024 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:34:04.995036 | orchestrator |
2026-04-05 00:34:04.995046 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-04-05 00:34:04.995058 | orchestrator | Sunday 05 April 2026 00:32:54 +0000 (0:00:00.992) 0:04:04.803 **********
2026-04-05 00:34:04.995069 | orchestrator | ok: [testbed-manager]
2026-04-05 00:34:04.995081 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:34:04.995093 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:34:04.995106 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:34:04.995118 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:34:04.995131 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:34:04.995143 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:34:04.995156 | orchestrator |
2026-04-05 00:34:04.995167 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-04-05 00:34:04.995179 | orchestrator | Sunday 05 April 2026 00:32:54 +0000 (0:00:00.330) 0:04:05.133 **********
2026-04-05 00:34:04.995190 | orchestrator | ok: [testbed-manager]
2026-04-05 00:34:04.995203 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:34:04.995215 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:34:04.995227 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:34:04.995239 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:34:04.995250 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:34:04.995262 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:34:04.995272 | orchestrator |
2026-04-05 00:34:04.995284 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-04-05 00:34:04.995298 | orchestrator | Sunday 05 April 2026 00:32:55 +0000 (0:00:00.304) 0:04:05.438 **********
2026-04-05 00:34:04.995316 | orchestrator | ok: [testbed-manager]
2026-04-05 00:34:04.995332 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:34:04.995345 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:34:04.995357 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:34:04.995367 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:34:04.995378 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:34:04.995388 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:34:04.995399 | orchestrator |
2026-04-05 00:34:04.995409 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-04-05 00:34:04.995451 | orchestrator | Sunday 05 April 2026 00:32:55 +0000 (0:00:00.340) 0:04:05.779 **********
2026-04-05 00:34:04.995463 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:34:04.995473 | orchestrator | ok: [testbed-manager]
2026-04-05 00:34:04.995484 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:34:04.995494 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:34:04.995503 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:34:04.995512 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:34:04.995521 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:34:04.995530 | orchestrator |
2026-04-05 00:34:04.995540 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-04-05 00:34:04.995551 | orchestrator | Sunday 05 April 2026 00:33:01 +0000 (0:00:05.853) 0:04:11.632 **********
2026-04-05 00:34:04.995562 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:34:04.995573 | orchestrator |
2026-04-05 00:34:04.995583 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-04-05 00:34:04.995594 | orchestrator | Sunday 05 April 2026 00:33:01 +0000 (0:00:00.433) 0:04:12.065 **********
2026-04-05 00:34:04.995606 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade) 
2026-04-05 00:34:04.995615 | orchestrator | skipping: [testbed-manager] => (item=apt-daily) 
2026-04-05 00:34:04.995624 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:34:04.995633 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade) 
2026-04-05 00:34:04.995644 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily) 
2026-04-05 00:34:04.995655 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade) 
2026-04-05 00:34:04.995666 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily) 
2026-04-05 00:34:04.995677 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:34:04.995688 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:34:04.995698 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade) 
2026-04-05 00:34:04.995708 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily) 
2026-04-05 00:34:04.995719 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade) 
2026-04-05 00:34:04.995731 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily) 
2026-04-05 00:34:04.995740 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:34:04.995750 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade) 
2026-04-05 00:34:04.995761 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily) 
2026-04-05 00:34:04.995793 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:34:04.995805 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:34:04.995816 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade) 
2026-04-05 00:34:04.995827 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily) 
2026-04-05 00:34:04.995838 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:34:04.995849 | orchestrator |
2026-04-05 00:34:04.995860 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-04-05 00:34:04.995890 | orchestrator | Sunday 05 April 2026 00:33:02 +0000 (0:00:00.389) 0:04:12.455 **********
2026-04-05 00:34:04.995903 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:34:04.995914 | orchestrator |
2026-04-05 00:34:04.995925 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-04-05 00:34:04.995972 | orchestrator | Sunday 05 April 2026 00:33:02 +0000 (0:00:00.548) 0:04:13.003 **********
2026-04-05 00:34:04.995983 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service) 
2026-04-05 00:34:04.995993 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:34:04.996003 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service) 
2026-04-05 00:34:04.996025 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:34:04.996035 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service) 
2026-04-05 00:34:04.996045 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service) 
2026-04-05 00:34:04.996055 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:34:04.996066 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service) 
2026-04-05 00:34:04.996075 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:34:04.996081 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service) 
2026-04-05 00:34:04.996087 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:34:04.996093 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:34:04.996100 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service) 
2026-04-05 00:34:04.996106 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:34:04.996112 | orchestrator |
2026-04-05 00:34:04.996118 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-04-05 00:34:04.996125 | orchestrator | Sunday 05 April 2026 00:33:03 +0000 (0:00:00.347) 0:04:13.351 **********
2026-04-05 00:34:04.996131 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:34:04.996138 | orchestrator |
2026-04-05 00:34:04.996144 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-04-05 00:34:04.996150 | orchestrator | Sunday 05 April 2026 00:33:03 +0000 (0:00:00.411) 0:04:13.763 **********
2026-04-05 00:34:04.996156 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:34:04.996163 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:34:04.996169 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:34:04.996175 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:34:04.996181 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:34:04.996187 | orchestrator | changed: [testbed-manager]
2026-04-05 00:34:04.996193 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:34:04.996199 | orchestrator |
2026-04-05 00:34:04.996206 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-04-05 00:34:04.996212 | orchestrator | Sunday 05 April 2026 00:33:40 +0000 (0:00:36.769) 0:04:50.533 **********
2026-04-05 00:34:04.996218 | orchestrator | changed: [testbed-manager]
2026-04-05 00:34:04.996224 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:34:04.996230 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:34:04.996236 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:34:04.996242 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:34:04.996254 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:34:04.996260 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:34:04.996280 | orchestrator |
2026-04-05 00:34:04.996287 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-04-05 00:34:04.996293 | orchestrator | Sunday 05 April 2026 00:33:49 +0000 (0:00:08.779) 0:04:59.312 **********
2026-04-05 00:34:04.996299 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:34:04.996306 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:34:04.996312 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:34:04.996318 | orchestrator | changed: [testbed-manager]
2026-04-05 00:34:04.996324 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:34:04.996330 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:34:04.996336 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:34:04.996342 | orchestrator |
2026-04-05 00:34:04.996349 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-04-05 00:34:04.996355 | orchestrator | Sunday 05 April 2026 00:33:56 +0000 (0:00:07.793) 0:05:07.105 **********
2026-04-05 00:34:04.996361 | orchestrator | ok: [testbed-manager]
2026-04-05 00:34:04.996367 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:34:04.996373 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:34:04.996379 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:34:04.996391 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:34:04.996398 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:34:04.996404 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:34:04.996410 | orchestrator |
2026-04-05 00:34:04.996416 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-04-05 00:34:04.996422 | orchestrator | Sunday 05 April 2026 00:33:58 +0000 (0:00:01.882) 0:05:08.987 **********
2026-04-05 00:34:04.996428 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:34:04.996444 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:34:04.996451 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:34:04.996457 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:34:04.996463 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:34:04.996471 | orchestrator | changed: [testbed-manager]
2026-04-05 00:34:04.996478 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:34:04.996485 | orchestrator |
2026-04-05 00:34:04.996502 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-04-05 00:34:16.793329 | orchestrator | Sunday 05 April 2026 00:34:04 +0000 (0:00:06.176) 0:05:15.163 **********
2026-04-05 00:34:16.793434 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:34:16.793447 | orchestrator |
2026-04-05 00:34:16.793458 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-04-05 00:34:16.793466 | orchestrator | Sunday 05 April 2026 00:34:05 +0000 (0:00:00.445) 0:05:15.609 **********
2026-04-05 00:34:16.793475 | orchestrator | changed: [testbed-manager]
2026-04-05 00:34:16.793483 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:34:16.793491 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:34:16.793505 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:34:16.793518 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:34:16.793531 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:34:16.793544 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:34:16.793558 | orchestrator |
2026-04-05 00:34:16.793570 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-04-05 00:34:16.793585 | orchestrator | Sunday 05 April 2026 00:34:06 +0000 (0:00:00.712) 0:05:16.322 **********
2026-04-05 00:34:16.793598 | orchestrator | ok: [testbed-manager]
2026-04-05 00:34:16.793613 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:34:16.793626 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:34:16.793640 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:34:16.793649 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:34:16.793657 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:34:16.793665 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:34:16.793672 | orchestrator |
2026-04-05 00:34:16.793681 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-04-05 00:34:16.793689 | orchestrator | Sunday 05 April 2026 00:34:08 +0000 (0:00:01.910) 0:05:18.232 **********
2026-04-05 00:34:16.793697 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:34:16.793704 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:34:16.793712 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:34:16.793720 | orchestrator | changed: [testbed-manager]
2026-04-05 00:34:16.793728 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:34:16.793736 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:34:16.793743 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:34:16.793751 | orchestrator |
2026-04-05 00:34:16.793759 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-04-05 00:34:16.793767 | orchestrator | Sunday 05 April 2026 00:34:08 +0000 (0:00:00.777) 0:05:19.009 **********
2026-04-05 00:34:16.793775 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:34:16.793782 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:34:16.793790 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:34:16.793798 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:34:16.793806 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:34:16.793832 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:34:16.793840 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:34:16.793848 | orchestrator |
2026-04-05 00:34:16.793856 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-04-05 00:34:16.793864 | orchestrator | Sunday 05 April 2026 00:34:09 +0000 (0:00:00.295) 0:05:19.305 **********
2026-04-05 00:34:16.793872 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:34:16.793881 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:34:16.793890 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:34:16.793900 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:34:16.793909 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:34:16.793918 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:34:16.793927 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:34:16.793937 | orchestrator |
2026-04-05 00:34:16.793947 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-04-05 00:34:16.793957 | orchestrator | Sunday 05 April 2026 00:34:09 +0000 (0:00:00.404) 0:05:19.709 **********
2026-04-05 00:34:16.793966 | orchestrator | ok: [testbed-manager]
2026-04-05 00:34:16.794004 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:34:16.794059 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:34:16.794070 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:34:16.794091 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:34:16.794101 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:34:16.794110 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:34:16.794120 | orchestrator |
2026-04-05 00:34:16.794129 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-04-05 00:34:16.794138 | orchestrator | Sunday 05 April 2026 00:34:09 +0000 (0:00:00.420) 0:05:20.130 **********
2026-04-05 00:34:16.794147 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:34:16.794157 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:34:16.794166 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:34:16.794175 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:34:16.794184 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:34:16.794193 | orchestrator | skipping: [testbed-node-4]
2026-04-05 
00:34:16.794202 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:34:16.794212 | orchestrator | 2026-04-05 00:34:16.794221 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-04-05 00:34:16.794232 | orchestrator | Sunday 05 April 2026 00:34:10 +0000 (0:00:00.286) 0:05:20.416 ********** 2026-04-05 00:34:16.794239 | orchestrator | ok: [testbed-manager] 2026-04-05 00:34:16.794247 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:34:16.794255 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:34:16.794263 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:34:16.794270 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:34:16.794278 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:34:16.794286 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:34:16.794293 | orchestrator | 2026-04-05 00:34:16.794302 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-04-05 00:34:16.794310 | orchestrator | Sunday 05 April 2026 00:34:10 +0000 (0:00:00.322) 0:05:20.738 ********** 2026-04-05 00:34:16.794318 | orchestrator | ok: [testbed-manager] =>  2026-04-05 00:34:16.794325 | orchestrator |  docker_version: 5:27.5.1 2026-04-05 00:34:16.794333 | orchestrator | ok: [testbed-node-0] =>  2026-04-05 00:34:16.794341 | orchestrator |  docker_version: 5:27.5.1 2026-04-05 00:34:16.794349 | orchestrator | ok: [testbed-node-1] =>  2026-04-05 00:34:16.794357 | orchestrator |  docker_version: 5:27.5.1 2026-04-05 00:34:16.794364 | orchestrator | ok: [testbed-node-2] =>  2026-04-05 00:34:16.794372 | orchestrator |  docker_version: 5:27.5.1 2026-04-05 00:34:16.794397 | orchestrator | ok: [testbed-node-3] =>  2026-04-05 00:34:16.794406 | orchestrator |  docker_version: 5:27.5.1 2026-04-05 00:34:16.794414 | orchestrator | ok: [testbed-node-4] =>  2026-04-05 00:34:16.794421 | orchestrator |  docker_version: 5:27.5.1 2026-04-05 00:34:16.794429 | orchestrator | ok: [testbed-node-5] =>  
2026-04-05 00:34:16.794437 | orchestrator |  docker_version: 5:27.5.1 2026-04-05 00:34:16.794451 | orchestrator | 2026-04-05 00:34:16.794459 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-04-05 00:34:16.794467 | orchestrator | Sunday 05 April 2026 00:34:10 +0000 (0:00:00.288) 0:05:21.027 ********** 2026-04-05 00:34:16.794475 | orchestrator | ok: [testbed-manager] =>  2026-04-05 00:34:16.794483 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-05 00:34:16.794491 | orchestrator | ok: [testbed-node-0] =>  2026-04-05 00:34:16.794504 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-05 00:34:16.794517 | orchestrator | ok: [testbed-node-1] =>  2026-04-05 00:34:16.794531 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-05 00:34:16.794545 | orchestrator | ok: [testbed-node-2] =>  2026-04-05 00:34:16.794558 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-05 00:34:16.794571 | orchestrator | ok: [testbed-node-3] =>  2026-04-05 00:34:16.794584 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-05 00:34:16.794597 | orchestrator | ok: [testbed-node-4] =>  2026-04-05 00:34:16.794610 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-05 00:34:16.794623 | orchestrator | ok: [testbed-node-5] =>  2026-04-05 00:34:16.794637 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-05 00:34:16.794650 | orchestrator | 2026-04-05 00:34:16.794664 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-04-05 00:34:16.794677 | orchestrator | Sunday 05 April 2026 00:34:11 +0000 (0:00:00.282) 0:05:21.309 ********** 2026-04-05 00:34:16.794690 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:34:16.794704 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:34:16.794718 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:34:16.794731 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:34:16.794746 | orchestrator | skipping: [testbed-node-3] 
2026-04-05 00:34:16.794761 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:34:16.794775 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:34:16.794790 | orchestrator | 2026-04-05 00:34:16.794799 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-04-05 00:34:16.794807 | orchestrator | Sunday 05 April 2026 00:34:11 +0000 (0:00:00.311) 0:05:21.620 ********** 2026-04-05 00:34:16.794814 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:34:16.794822 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:34:16.794830 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:34:16.794838 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:34:16.794846 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:34:16.794853 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:34:16.794861 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:34:16.794869 | orchestrator | 2026-04-05 00:34:16.794877 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-04-05 00:34:16.794885 | orchestrator | Sunday 05 April 2026 00:34:11 +0000 (0:00:00.270) 0:05:21.891 ********** 2026-04-05 00:34:16.794894 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:34:16.794904 | orchestrator | 2026-04-05 00:34:16.794911 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-04-05 00:34:16.794919 | orchestrator | Sunday 05 April 2026 00:34:12 +0000 (0:00:00.449) 0:05:22.340 ********** 2026-04-05 00:34:16.794927 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:34:16.794935 | orchestrator | ok: [testbed-manager] 2026-04-05 00:34:16.794943 | orchestrator | ok: [testbed-node-0] 2026-04-05 
00:34:16.794951 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:34:16.794958 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:34:16.794966 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:34:16.794999 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:34:16.795006 | orchestrator | 2026-04-05 00:34:16.795014 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-04-05 00:34:16.795028 | orchestrator | Sunday 05 April 2026 00:34:13 +0000 (0:00:00.903) 0:05:23.243 ********** 2026-04-05 00:34:16.795043 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:34:16.795051 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:34:16.795058 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:34:16.795066 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:34:16.795074 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:34:16.795081 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:34:16.795089 | orchestrator | ok: [testbed-manager] 2026-04-05 00:34:16.795097 | orchestrator | 2026-04-05 00:34:16.795105 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-04-05 00:34:16.795115 | orchestrator | Sunday 05 April 2026 00:34:16 +0000 (0:00:03.320) 0:05:26.564 ********** 2026-04-05 00:34:16.795123 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-04-05 00:34:16.795131 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-04-05 00:34:16.795139 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-04-05 00:34:16.795146 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:34:16.795154 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-04-05 00:34:16.795162 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-04-05 00:34:16.795170 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-04-05 00:34:16.795178 | orchestrator | skipping: 
[testbed-node-1] => (item=containerd)  2026-04-05 00:34:16.795186 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-04-05 00:34:16.795193 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-04-05 00:34:16.795201 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:34:16.795209 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-04-05 00:34:16.795217 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-04-05 00:34:16.795224 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-04-05 00:34:16.795232 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:34:16.795240 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-04-05 00:34:16.795255 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-04-05 00:35:18.283025 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-04-05 00:35:18.283229 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:35:18.283260 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-04-05 00:35:18.283280 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-04-05 00:35:18.283299 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-04-05 00:35:18.283320 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:35:18.283338 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:35:18.283358 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-04-05 00:35:18.283377 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-04-05 00:35:18.283396 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2026-04-05 00:35:18.283415 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:35:18.283433 | orchestrator | 2026-04-05 00:35:18.283454 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-04-05 00:35:18.283475 | orchestrator | Sunday 05 
April 2026 00:34:17 +0000 (0:00:00.656) 0:05:27.220 ********** 2026-04-05 00:35:18.283494 | orchestrator | ok: [testbed-manager] 2026-04-05 00:35:18.283514 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:35:18.283534 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:35:18.283554 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:35:18.283575 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:35:18.283595 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:35:18.283615 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:35:18.283634 | orchestrator | 2026-04-05 00:35:18.283654 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-04-05 00:35:18.283674 | orchestrator | Sunday 05 April 2026 00:34:23 +0000 (0:00:06.467) 0:05:33.688 ********** 2026-04-05 00:35:18.283695 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:35:18.283755 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:35:18.283775 | orchestrator | ok: [testbed-manager] 2026-04-05 00:35:18.283795 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:35:18.283814 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:35:18.283833 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:35:18.283851 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:35:18.283870 | orchestrator | 2026-04-05 00:35:18.283890 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-04-05 00:35:18.283910 | orchestrator | Sunday 05 April 2026 00:34:24 +0000 (0:00:01.065) 0:05:34.753 ********** 2026-04-05 00:35:18.283931 | orchestrator | ok: [testbed-manager] 2026-04-05 00:35:18.283951 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:35:18.283971 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:35:18.283992 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:35:18.284012 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:35:18.284031 | orchestrator | 
changed: [testbed-node-5] 2026-04-05 00:35:18.284050 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:35:18.284109 | orchestrator | 2026-04-05 00:35:18.284131 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-04-05 00:35:18.284149 | orchestrator | Sunday 05 April 2026 00:34:33 +0000 (0:00:08.871) 0:05:43.625 ********** 2026-04-05 00:35:18.284166 | orchestrator | changed: [testbed-manager] 2026-04-05 00:35:18.284185 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:35:18.284204 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:35:18.284223 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:35:18.284241 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:35:18.284258 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:35:18.284276 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:35:18.284294 | orchestrator | 2026-04-05 00:35:18.284312 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-04-05 00:35:18.284331 | orchestrator | Sunday 05 April 2026 00:34:37 +0000 (0:00:03.780) 0:05:47.406 ********** 2026-04-05 00:35:18.284349 | orchestrator | ok: [testbed-manager] 2026-04-05 00:35:18.284368 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:35:18.284384 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:35:18.284400 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:35:18.284418 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:35:18.284434 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:35:18.284452 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:35:18.284469 | orchestrator | 2026-04-05 00:35:18.284487 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-04-05 00:35:18.284506 | orchestrator | Sunday 05 April 2026 00:34:38 +0000 (0:00:01.372) 0:05:48.778 ********** 2026-04-05 00:35:18.284523 | orchestrator | ok: [testbed-manager] 
2026-04-05 00:35:18.284541 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:35:18.284559 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:35:18.284578 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:35:18.284595 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:35:18.284611 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:35:18.284622 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:35:18.284632 | orchestrator | 2026-04-05 00:35:18.284643 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2026-04-05 00:35:18.284655 | orchestrator | Sunday 05 April 2026 00:34:39 +0000 (0:00:01.394) 0:05:50.173 ********** 2026-04-05 00:35:18.284666 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:35:18.284676 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:35:18.284687 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:35:18.284698 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:35:18.284708 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:35:18.284719 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:35:18.284730 | orchestrator | changed: [testbed-manager] 2026-04-05 00:35:18.284740 | orchestrator | 2026-04-05 00:35:18.284751 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-04-05 00:35:18.284780 | orchestrator | Sunday 05 April 2026 00:34:40 +0000 (0:00:00.695) 0:05:50.869 ********** 2026-04-05 00:35:18.284791 | orchestrator | ok: [testbed-manager] 2026-04-05 00:35:18.284802 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:35:18.284812 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:35:18.284823 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:35:18.284834 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:35:18.284844 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:35:18.284855 | orchestrator | changed: [testbed-node-2] 2026-04-05 
00:35:18.284865 | orchestrator | 2026-04-05 00:35:18.284876 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-04-05 00:35:18.284912 | orchestrator | Sunday 05 April 2026 00:34:50 +0000 (0:00:09.890) 0:06:00.759 ********** 2026-04-05 00:35:18.284924 | orchestrator | changed: [testbed-manager] 2026-04-05 00:35:18.284934 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:35:18.284945 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:35:18.284956 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:35:18.284966 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:35:18.284977 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:35:18.284987 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:35:18.284998 | orchestrator | 2026-04-05 00:35:18.285009 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-04-05 00:35:18.285020 | orchestrator | Sunday 05 April 2026 00:34:51 +0000 (0:00:01.198) 0:06:01.958 ********** 2026-04-05 00:35:18.285030 | orchestrator | ok: [testbed-manager] 2026-04-05 00:35:18.285041 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:35:18.285051 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:35:18.285090 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:35:18.285110 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:35:18.285130 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:35:18.285148 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:35:18.285166 | orchestrator | 2026-04-05 00:35:18.285177 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-04-05 00:35:18.285188 | orchestrator | Sunday 05 April 2026 00:35:01 +0000 (0:00:09.295) 0:06:11.254 ********** 2026-04-05 00:35:18.285199 | orchestrator | ok: [testbed-manager] 2026-04-05 00:35:18.285210 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:35:18.285220 | 
orchestrator | changed: [testbed-node-3] 2026-04-05 00:35:18.285231 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:35:18.285242 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:35:18.285252 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:35:18.285263 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:35:18.285273 | orchestrator | 2026-04-05 00:35:18.285284 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-04-05 00:35:18.285295 | orchestrator | Sunday 05 April 2026 00:35:11 +0000 (0:00:10.776) 0:06:22.031 ********** 2026-04-05 00:35:18.285305 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2026-04-05 00:35:18.285316 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2026-04-05 00:35:18.285328 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2026-04-05 00:35:18.285347 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2026-04-05 00:35:18.285364 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2026-04-05 00:35:18.285382 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2026-04-05 00:35:18.285401 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2026-04-05 00:35:18.285419 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2026-04-05 00:35:18.285434 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2026-04-05 00:35:18.285445 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2026-04-05 00:35:18.285456 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2026-04-05 00:35:18.285466 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2026-04-05 00:35:18.285477 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2026-04-05 00:35:18.285488 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2026-04-05 00:35:18.285507 | orchestrator | 2026-04-05 00:35:18.285518 | orchestrator | TASK [osism.services.docker : Install python3 
docker package] ****************** 2026-04-05 00:35:18.285529 | orchestrator | Sunday 05 April 2026 00:35:12 +0000 (0:00:01.127) 0:06:23.158 ********** 2026-04-05 00:35:18.285539 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:35:18.285549 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:35:18.285560 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:35:18.285571 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:35:18.285581 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:35:18.285592 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:35:18.285602 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:35:18.285613 | orchestrator | 2026-04-05 00:35:18.285623 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2026-04-05 00:35:18.285685 | orchestrator | Sunday 05 April 2026 00:35:13 +0000 (0:00:00.617) 0:06:23.775 ********** 2026-04-05 00:35:18.285697 | orchestrator | ok: [testbed-manager] 2026-04-05 00:35:18.285713 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:35:18.285724 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:35:18.285734 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:35:18.285745 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:35:18.285755 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:35:18.285766 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:35:18.285777 | orchestrator | 2026-04-05 00:35:18.285788 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2026-04-05 00:35:18.285800 | orchestrator | Sunday 05 April 2026 00:35:17 +0000 (0:00:03.850) 0:06:27.626 ********** 2026-04-05 00:35:18.285810 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:35:18.285821 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:35:18.285831 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:35:18.285842 | orchestrator | skipping: 
[testbed-node-2] 2026-04-05 00:35:18.285852 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:35:18.285863 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:35:18.285873 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:35:18.285884 | orchestrator | 2026-04-05 00:35:18.285896 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2026-04-05 00:35:18.285907 | orchestrator | Sunday 05 April 2026 00:35:18 +0000 (0:00:00.566) 0:06:28.193 ********** 2026-04-05 00:35:18.285917 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2026-04-05 00:35:18.285928 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2026-04-05 00:35:18.285939 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:35:18.285950 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2026-04-05 00:35:18.285961 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2026-04-05 00:35:18.285971 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:35:18.285982 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2026-04-05 00:35:18.285992 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2026-04-05 00:35:18.286003 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:35:18.286109 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-04-05 00:35:37.770760 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-04-05 00:35:37.770875 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:35:37.770892 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2026-04-05 00:35:37.770904 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2026-04-05 00:35:37.770916 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:35:37.770927 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2026-04-05 00:35:37.770937 | 
orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2026-04-05 00:35:37.770948 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:35:37.770959 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2026-04-05 00:35:37.770998 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2026-04-05 00:35:37.771010 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:35:37.771021 | orchestrator | 2026-04-05 00:35:37.771034 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2026-04-05 00:35:37.771045 | orchestrator | Sunday 05 April 2026 00:35:18 +0000 (0:00:00.550) 0:06:28.743 ********** 2026-04-05 00:35:37.771056 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:35:37.771107 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:35:37.771120 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:35:37.771131 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:35:37.771142 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:35:37.771152 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:35:37.771163 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:35:37.771174 | orchestrator | 2026-04-05 00:35:37.771185 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-04-05 00:35:37.771196 | orchestrator | Sunday 05 April 2026 00:35:19 +0000 (0:00:00.531) 0:06:29.275 ********** 2026-04-05 00:35:37.771207 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:35:37.771218 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:35:37.771228 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:35:37.771239 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:35:37.771250 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:35:37.771261 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:35:37.771279 | orchestrator | skipping: [testbed-node-5] 
2026-04-05 00:35:37.771299 | orchestrator |
2026-04-05 00:35:37.771319 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-04-05 00:35:37.771338 | orchestrator | Sunday 05 April 2026 00:35:19 +0000 (0:00:00.722) 0:06:29.998 **********
2026-04-05 00:35:37.771359 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:35:37.771379 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:35:37.771398 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:35:37.771418 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:35:37.771438 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:35:37.771459 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:35:37.771479 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:35:37.771499 | orchestrator |
2026-04-05 00:35:37.771520 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-04-05 00:35:37.771532 | orchestrator | Sunday 05 April 2026 00:35:20 +0000 (0:00:00.552) 0:06:30.551 **********
2026-04-05 00:35:37.771543 | orchestrator | ok: [testbed-manager]
2026-04-05 00:35:37.771554 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:35:37.771564 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:35:37.771575 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:35:37.771585 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:35:37.771596 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:35:37.771606 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:35:37.771617 | orchestrator |
2026-04-05 00:35:37.771627 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-04-05 00:35:37.771638 | orchestrator | Sunday 05 April 2026 00:35:22 +0000 (0:00:01.785) 0:06:32.336 **********
2026-04-05 00:35:37.771650 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:35:37.771663 | orchestrator |
2026-04-05 00:35:37.771675 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-04-05 00:35:37.771685 | orchestrator | Sunday 05 April 2026 00:35:23 +0000 (0:00:00.890) 0:06:33.227 **********
2026-04-05 00:35:37.771696 | orchestrator | ok: [testbed-manager]
2026-04-05 00:35:37.771707 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:35:37.771718 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:35:37.771728 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:35:37.771739 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:35:37.771762 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:35:37.771773 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:35:37.771783 | orchestrator |
2026-04-05 00:35:37.771794 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-04-05 00:35:37.771805 | orchestrator | Sunday 05 April 2026 00:35:24 +0000 (0:00:00.996) 0:06:34.223 **********
2026-04-05 00:35:37.771815 | orchestrator | ok: [testbed-manager]
2026-04-05 00:35:37.771826 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:35:37.771836 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:35:37.771847 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:35:37.771857 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:35:37.771867 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:35:37.771878 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:35:37.771889 | orchestrator |
2026-04-05 00:35:37.771899 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-04-05 00:35:37.771910 | orchestrator | Sunday 05 April 2026 00:35:24 +0000 (0:00:00.884) 0:06:35.108 **********
2026-04-05 00:35:37.771921 | orchestrator | ok: [testbed-manager]
2026-04-05 00:35:37.771931 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:35:37.771942 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:35:37.771953 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:35:37.771963 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:35:37.771974 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:35:37.771984 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:35:37.771995 | orchestrator |
2026-04-05 00:35:37.772006 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-04-05 00:35:37.772036 | orchestrator | Sunday 05 April 2026 00:35:26 +0000 (0:00:01.434) 0:06:36.542 **********
2026-04-05 00:35:37.772048 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:35:37.772059 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:35:37.772153 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:35:37.772169 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:35:37.772180 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:35:37.772190 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:35:37.772201 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:35:37.772212 | orchestrator |
2026-04-05 00:35:37.772222 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-04-05 00:35:37.772233 | orchestrator | Sunday 05 April 2026 00:35:27 +0000 (0:00:01.412) 0:06:37.955 **********
2026-04-05 00:35:37.772244 | orchestrator | ok: [testbed-manager]
2026-04-05 00:35:37.772255 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:35:37.772265 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:35:37.772276 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:35:37.772287 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:35:37.772297 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:35:37.772308 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:35:37.772318 | orchestrator |
2026-04-05 00:35:37.772329 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-04-05 00:35:37.772340 | orchestrator | Sunday 05 April 2026 00:35:29 +0000 (0:00:01.281) 0:06:39.236 **********
2026-04-05 00:35:37.772350 | orchestrator | changed: [testbed-manager]
2026-04-05 00:35:37.772361 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:35:37.772372 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:35:37.772382 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:35:37.772393 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:35:37.772404 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:35:37.772414 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:35:37.772425 | orchestrator |
2026-04-05 00:35:37.772436 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-04-05 00:35:37.772447 | orchestrator | Sunday 05 April 2026 00:35:30 +0000 (0:00:01.618) 0:06:40.855 **********
2026-04-05 00:35:37.772457 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:35:37.772484 | orchestrator |
2026-04-05 00:35:37.772496 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-04-05 00:35:37.772506 | orchestrator | Sunday 05 April 2026 00:35:31 +0000 (0:00:00.899) 0:06:41.755 **********
2026-04-05 00:35:37.772517 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:35:37.772527 | orchestrator | ok: [testbed-manager]
2026-04-05 00:35:37.772538 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:35:37.772549 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:35:37.772559 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:35:37.772570 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:35:37.772580 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:35:37.772591 | orchestrator |
2026-04-05 00:35:37.772602 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-04-05 00:35:37.772612 | orchestrator | Sunday 05 April 2026 00:35:32 +0000 (0:00:01.334) 0:06:43.089 **********
2026-04-05 00:35:37.772623 | orchestrator | ok: [testbed-manager]
2026-04-05 00:35:37.772633 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:35:37.772644 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:35:37.772655 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:35:37.772665 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:35:37.772676 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:35:37.772686 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:35:37.772697 | orchestrator |
2026-04-05 00:35:37.772708 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-04-05 00:35:37.772718 | orchestrator | Sunday 05 April 2026 00:35:34 +0000 (0:00:01.298) 0:06:44.387 **********
2026-04-05 00:35:37.772729 | orchestrator | ok: [testbed-manager]
2026-04-05 00:35:37.772740 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:35:37.772750 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:35:37.772761 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:35:37.772771 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:35:37.772782 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:35:37.772793 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:35:37.772803 | orchestrator |
2026-04-05 00:35:37.772829 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-04-05 00:35:37.772840 | orchestrator | Sunday 05 April 2026 00:35:35 +0000 (0:00:01.111) 0:06:45.499 **********
2026-04-05 00:35:37.772851 | orchestrator | ok: [testbed-manager]
2026-04-05 00:35:37.772861 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:35:37.772872 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:35:37.772882 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:35:37.772893 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:35:37.772903 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:35:37.772914 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:35:37.772924 | orchestrator |
2026-04-05 00:35:37.772935 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-04-05 00:35:37.772946 | orchestrator | Sunday 05 April 2026 00:35:36 +0000 (0:00:01.135) 0:06:46.635 **********
2026-04-05 00:35:37.772957 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:35:37.772967 | orchestrator |
2026-04-05 00:35:37.772979 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-05 00:35:37.772989 | orchestrator | Sunday 05 April 2026 00:35:37 +0000 (0:00:00.991) 0:06:47.626 **********
2026-04-05 00:35:37.773000 | orchestrator |
2026-04-05 00:35:37.773011 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-05 00:35:37.773021 | orchestrator | Sunday 05 April 2026 00:35:37 +0000 (0:00:00.053) 0:06:47.680 **********
2026-04-05 00:35:37.773032 | orchestrator |
2026-04-05 00:35:37.773043 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-05 00:35:37.773053 | orchestrator | Sunday 05 April 2026 00:35:37 +0000 (0:00:00.223) 0:06:47.904 **********
2026-04-05 00:35:37.773090 | orchestrator |
2026-04-05 00:35:37.773102 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-05 00:35:37.773122 | orchestrator | Sunday 05 April 2026 00:35:37 +0000 (0:00:00.040) 0:06:47.945 **********
2026-04-05 00:36:03.980923 | orchestrator |
2026-04-05 00:36:03.981022 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-05 00:36:03.981035 | orchestrator | Sunday 05 April 2026 00:35:37 +0000 (0:00:00.060) 0:06:48.005 **********
2026-04-05 00:36:03.981043 | orchestrator |
2026-04-05 00:36:03.981052 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-05 00:36:03.981060 | orchestrator | Sunday 05 April 2026 00:35:37 +0000 (0:00:00.065) 0:06:48.071 **********
2026-04-05 00:36:03.981111 | orchestrator |
2026-04-05 00:36:03.981120 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-05 00:36:03.981128 | orchestrator | Sunday 05 April 2026 00:35:37 +0000 (0:00:00.041) 0:06:48.112 **********
2026-04-05 00:36:03.981136 | orchestrator |
2026-04-05 00:36:03.981145 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-04-05 00:36:03.981153 | orchestrator | Sunday 05 April 2026 00:35:37 +0000 (0:00:00.042) 0:06:48.155 **********
2026-04-05 00:36:03.981161 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:36:03.981170 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:36:03.981178 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:36:03.981186 | orchestrator |
2026-04-05 00:36:03.981194 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-04-05 00:36:03.981202 | orchestrator | Sunday 05 April 2026 00:35:39 +0000 (0:00:01.225) 0:06:49.380 **********
2026-04-05 00:36:03.981210 | orchestrator | changed: [testbed-manager]
2026-04-05 00:36:03.981219 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:36:03.981227 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:36:03.981235 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:36:03.981243 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:36:03.981251 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:36:03.981259 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:36:03.981267 | orchestrator |
2026-04-05 00:36:03.981275 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-04-05 00:36:03.981283 | orchestrator | Sunday 05 April 2026 00:35:40 +0000 (0:00:01.348) 0:06:50.728 **********
2026-04-05 00:36:03.981291 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:36:03.981299 | orchestrator | changed: [testbed-manager]
2026-04-05 00:36:03.981307 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:36:03.981314 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:36:03.981322 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:36:03.981330 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:36:03.981338 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:36:03.981346 | orchestrator |
2026-04-05 00:36:03.981353 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-04-05 00:36:03.981361 | orchestrator | Sunday 05 April 2026 00:35:41 +0000 (0:00:01.270) 0:06:51.999 **********
2026-04-05 00:36:03.981369 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:36:03.981377 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:36:03.981385 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:36:03.981393 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:36:03.981401 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:36:03.981408 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:36:03.981416 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:36:03.981424 | orchestrator |
2026-04-05 00:36:03.981432 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-04-05 00:36:03.981440 | orchestrator | Sunday 05 April 2026 00:35:44 +0000 (0:00:02.582) 0:06:54.582 **********
2026-04-05 00:36:03.981448 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:36:03.981456 | orchestrator |
2026-04-05 00:36:03.981464 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-04-05 00:36:03.981473 | orchestrator | Sunday 05 April 2026 00:35:44 +0000 (0:00:00.123) 0:06:54.705 **********
2026-04-05 00:36:03.981507 | orchestrator | ok: [testbed-manager]
2026-04-05 00:36:03.981517 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:36:03.981527 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:36:03.981536 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:36:03.981546 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:36:03.981556 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:36:03.981565 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:36:03.981574 | orchestrator |
2026-04-05 00:36:03.981596 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-04-05 00:36:03.981607 | orchestrator | Sunday 05 April 2026 00:35:45 +0000 (0:00:01.214) 0:06:55.920 **********
2026-04-05 00:36:03.981616 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:36:03.981626 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:36:03.981635 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:36:03.981645 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:36:03.981652 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:36:03.981660 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:36:03.981668 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:36:03.981676 | orchestrator |
2026-04-05 00:36:03.981683 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-04-05 00:36:03.981691 | orchestrator | Sunday 05 April 2026 00:35:46 +0000 (0:00:00.549) 0:06:56.470 **********
2026-04-05 00:36:03.981700 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:36:03.981710 | orchestrator |
2026-04-05 00:36:03.981718 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-04-05 00:36:03.981726 | orchestrator | Sunday 05 April 2026 00:35:47 +0000 (0:00:01.005) 0:06:57.475 **********
2026-04-05 00:36:03.981734 | orchestrator | ok: [testbed-manager]
2026-04-05 00:36:03.981742 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:36:03.981750 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:36:03.981757 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:36:03.981765 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:36:03.981773 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:36:03.981780 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:36:03.981788 | orchestrator |
2026-04-05 00:36:03.981796 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-04-05 00:36:03.981804 | orchestrator | Sunday 05 April 2026 00:35:48 +0000 (0:00:01.094) 0:06:58.570 **********
2026-04-05 00:36:03.981812 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-04-05 00:36:03.981835 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-04-05 00:36:03.981844 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-04-05 00:36:03.981852 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-04-05 00:36:03.981859 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-04-05 00:36:03.981867 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-04-05 00:36:03.981875 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-04-05 00:36:03.981883 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-04-05 00:36:03.981891 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-04-05 00:36:03.981899 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-04-05 00:36:03.981906 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-04-05 00:36:03.981914 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-04-05 00:36:03.981922 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-04-05 00:36:03.981930 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-04-05 00:36:03.981938 | orchestrator |
2026-04-05 00:36:03.981945 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-04-05 00:36:03.981960 | orchestrator | Sunday 05 April 2026 00:35:50 +0000 (0:00:02.515) 0:07:01.086 **********
2026-04-05 00:36:03.981967 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:36:03.981975 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:36:03.981983 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:36:03.981991 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:36:03.981999 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:36:03.982006 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:36:03.982064 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:36:03.982091 | orchestrator |
2026-04-05 00:36:03.982099 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-04-05 00:36:03.982107 | orchestrator | Sunday 05 April 2026 00:35:51 +0000 (0:00:00.516) 0:07:01.603 **********
2026-04-05 00:36:03.982116 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:36:03.982125 | orchestrator |
2026-04-05 00:36:03.982133 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-04-05 00:36:03.982141 | orchestrator | Sunday 05 April 2026 00:35:52 +0000 (0:00:01.046) 0:07:02.649 **********
2026-04-05 00:36:03.982149 | orchestrator | ok: [testbed-manager]
2026-04-05 00:36:03.982157 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:36:03.982165 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:36:03.982172 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:36:03.982180 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:36:03.982188 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:36:03.982196 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:36:03.982203 | orchestrator |
2026-04-05 00:36:03.982211 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-04-05 00:36:03.982219 | orchestrator | Sunday 05 April 2026 00:35:53 +0000 (0:00:00.862) 0:07:03.511 **********
2026-04-05 00:36:03.982227 | orchestrator | ok: [testbed-manager]
2026-04-05 00:36:03.982235 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:36:03.982242 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:36:03.982250 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:36:03.982258 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:36:03.982266 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:36:03.982273 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:36:03.982281 | orchestrator |
2026-04-05 00:36:03.982289 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-04-05 00:36:03.982297 | orchestrator | Sunday 05 April 2026 00:35:54 +0000 (0:00:00.844) 0:07:04.356 **********
2026-04-05 00:36:03.982305 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:36:03.982317 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:36:03.982325 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:36:03.982333 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:36:03.982341 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:36:03.982349 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:36:03.982356 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:36:03.982364 | orchestrator |
2026-04-05 00:36:03.982372 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-04-05 00:36:03.982380 | orchestrator | Sunday 05 April 2026 00:35:54 +0000 (0:00:00.520) 0:07:04.877 **********
2026-04-05 00:36:03.982388 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:36:03.982395 | orchestrator | ok: [testbed-manager]
2026-04-05 00:36:03.982403 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:36:03.982411 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:36:03.982419 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:36:03.982426 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:36:03.982434 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:36:03.982442 | orchestrator |
2026-04-05 00:36:03.982450 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-04-05 00:36:03.982458 | orchestrator | Sunday 05 April 2026 00:35:56 +0000 (0:00:01.432) 0:07:06.310 **********
2026-04-05 00:36:03.982471 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:36:03.982479 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:36:03.982487 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:36:03.982495 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:36:03.982503 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:36:03.982511 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:36:03.982518 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:36:03.982526 | orchestrator |
2026-04-05 00:36:03.982534 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-04-05 00:36:03.982542 | orchestrator | Sunday 05 April 2026 00:35:56 +0000 (0:00:00.734) 0:07:07.045 **********
2026-04-05 00:36:03.982550 | orchestrator | ok: [testbed-manager]
2026-04-05 00:36:03.982558 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:36:03.982566 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:36:03.982574 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:36:03.982582 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:36:03.982590 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:36:03.982603 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:36:37.032948 | orchestrator |
2026-04-05 00:36:37.033085 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-04-05 00:36:37.033174 | orchestrator | Sunday 05 April 2026 00:36:04 +0000 (0:00:07.184) 0:07:14.230 **********
2026-04-05 00:36:37.033189 | orchestrator | ok: [testbed-manager]
2026-04-05 00:36:37.033201 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:36:37.033213 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:36:37.033224 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:36:37.033235 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:36:37.033246 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:36:37.033256 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:36:37.033267 | orchestrator |
2026-04-05 00:36:37.033278 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-04-05 00:36:37.033290 | orchestrator | Sunday 05 April 2026 00:36:05 +0000 (0:00:01.337) 0:07:15.567 **********
2026-04-05 00:36:37.033300 | orchestrator | ok: [testbed-manager]
2026-04-05 00:36:37.033311 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:36:37.033322 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:36:37.033333 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:36:37.033344 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:36:37.033354 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:36:37.033365 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:36:37.033376 | orchestrator |
2026-04-05 00:36:37.033387 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-04-05 00:36:37.033400 | orchestrator | Sunday 05 April 2026 00:36:07 +0000 (0:00:01.733) 0:07:17.301 **********
2026-04-05 00:36:37.033413 | orchestrator | ok: [testbed-manager]
2026-04-05 00:36:37.033426 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:36:37.033438 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:36:37.033451 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:36:37.033464 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:36:37.033477 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:36:37.033489 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:36:37.033499 | orchestrator |
2026-04-05 00:36:37.033510 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-05 00:36:37.033521 | orchestrator | Sunday 05 April 2026 00:36:08 +0000 (0:00:01.838) 0:07:19.139 **********
2026-04-05 00:36:37.033532 | orchestrator | ok: [testbed-manager]
2026-04-05 00:36:37.033543 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:36:37.033553 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:36:37.033564 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:36:37.033575 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:36:37.033608 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:36:37.033620 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:36:37.033631 | orchestrator |
2026-04-05 00:36:37.033642 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-05 00:36:37.033677 | orchestrator | Sunday 05 April 2026 00:36:09 +0000 (0:00:00.916) 0:07:20.055 **********
2026-04-05 00:36:37.033689 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:36:37.033699 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:36:37.033710 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:36:37.033721 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:36:37.033732 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:36:37.033743 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:36:37.033753 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:36:37.033764 | orchestrator |
2026-04-05 00:36:37.033775 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-04-05 00:36:37.033786 | orchestrator | Sunday 05 April 2026 00:36:10 +0000 (0:00:00.851) 0:07:20.907 **********
2026-04-05 00:36:37.033796 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:36:37.033807 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:36:37.033817 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:36:37.033828 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:36:37.033838 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:36:37.033849 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:36:37.033859 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:36:37.033870 | orchestrator |
2026-04-05 00:36:37.033881 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-04-05 00:36:37.033892 | orchestrator | Sunday 05 April 2026 00:36:11 +0000 (0:00:00.703) 0:07:21.610 **********
2026-04-05 00:36:37.033902 | orchestrator | ok: [testbed-manager]
2026-04-05 00:36:37.033913 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:36:37.033924 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:36:37.033934 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:36:37.033945 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:36:37.033955 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:36:37.033966 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:36:37.033976 | orchestrator |
2026-04-05 00:36:37.033987 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-04-05 00:36:37.033998 | orchestrator | Sunday 05 April 2026 00:36:11 +0000 (0:00:00.561) 0:07:22.172 **********
2026-04-05 00:36:37.034008 | orchestrator | ok: [testbed-manager]
2026-04-05 00:36:37.034149 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:36:37.034170 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:36:37.034214 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:36:37.034234 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:36:37.034245 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:36:37.034256 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:36:37.034266 | orchestrator |
2026-04-05 00:36:37.034278 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-04-05 00:36:37.034289 | orchestrator | Sunday 05 April 2026 00:36:12 +0000 (0:00:00.569) 0:07:22.742 **********
2026-04-05 00:36:37.034299 | orchestrator | ok: [testbed-manager]
2026-04-05 00:36:37.034310 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:36:37.034321 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:36:37.034331 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:36:37.034342 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:36:37.034352 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:36:37.034363 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:36:37.034373 | orchestrator |
2026-04-05 00:36:37.034384 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-04-05 00:36:37.034395 | orchestrator | Sunday 05 April 2026 00:36:13 +0000 (0:00:00.535) 0:07:23.277 **********
2026-04-05 00:36:37.034406 | orchestrator | ok: [testbed-manager]
2026-04-05 00:36:37.034417 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:36:37.034427 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:36:37.034438 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:36:37.034448 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:36:37.034459 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:36:37.034470 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:36:37.034480 | orchestrator |
2026-04-05 00:36:37.034523 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-04-05 00:36:37.034535 | orchestrator | Sunday 05 April 2026 00:36:18 +0000 (0:00:05.568) 0:07:28.845 **********
2026-04-05 00:36:37.034546 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:36:37.034557 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:36:37.034568 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:36:37.034579 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:36:37.034589 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:36:37.034600 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:36:37.034611 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:36:37.034622 | orchestrator |
2026-04-05 00:36:37.034632 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-04-05 00:36:37.034643 | orchestrator | Sunday 05 April 2026 00:36:19 +0000 (0:00:00.764) 0:07:29.609 **********
2026-04-05 00:36:37.034655 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:36:37.034668 | orchestrator |
2026-04-05 00:36:37.034679 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-04-05 00:36:37.034690 | orchestrator | Sunday 05 April 2026 00:36:20 +0000 (0:00:00.849) 0:07:30.459 **********
2026-04-05 00:36:37.034700 | orchestrator | ok: [testbed-manager]
2026-04-05 00:36:37.034711 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:36:37.034722 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:36:37.034732 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:36:37.034743 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:36:37.034754 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:36:37.034764 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:36:37.034775 | orchestrator |
2026-04-05 00:36:37.034786 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-04-05 00:36:37.034797 | orchestrator | Sunday 05 April 2026 00:36:22 +0000 (0:00:01.894) 0:07:32.353 **********
2026-04-05 00:36:37.034807 | orchestrator | ok: [testbed-manager]
2026-04-05 00:36:37.034818 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:36:37.034829 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:36:37.034839 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:36:37.034850 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:36:37.034861 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:36:37.034871 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:36:37.034882 | orchestrator |
2026-04-05 00:36:37.034893 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-04-05 00:36:37.034904 | orchestrator | Sunday 05 April 2026 00:36:23 +0000 (0:00:01.332) 0:07:33.686 **********
2026-04-05 00:36:37.034915 | orchestrator | ok: [testbed-manager]
2026-04-05 00:36:37.034925 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:36:37.034936 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:36:37.034947 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:36:37.034957 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:36:37.034968 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:36:37.034979 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:36:37.034990 | orchestrator |
2026-04-05 00:36:37.035000 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-04-05 00:36:37.035011 | orchestrator | Sunday 05 April 2026 00:36:25 +0000 (0:00:01.685) 0:07:35.372 **********
2026-04-05 00:36:37.035022 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-05 00:36:37.035034 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-05 00:36:37.035045 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-05 00:36:37.035062 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-05 00:36:37.035081 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-05 00:36:37.035092 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-05 00:36:37.035123 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-05 00:36:37.035134 | orchestrator |
2026-04-05 00:36:37.035145 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-04-05 00:36:37.035156 | orchestrator | Sunday 05 April 2026 00:36:26 +0000 (0:00:01.760) 0:07:37.132 **********
2026-04-05 00:36:37.035167 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:36:37.035178 | orchestrator |
2026-04-05 00:36:37.035188 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-04-05 00:36:37.035199 |
orchestrator | Sunday 05 April 2026 00:36:28 +0000 (0:00:01.120) 0:07:38.253 ********** 2026-04-05 00:36:37.035210 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:36:37.035220 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:36:37.035231 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:36:37.035242 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:36:37.035252 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:36:37.035263 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:36:37.035274 | orchestrator | changed: [testbed-manager] 2026-04-05 00:36:37.035284 | orchestrator | 2026-04-05 00:36:37.035302 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-04-05 00:37:07.521410 | orchestrator | Sunday 05 April 2026 00:36:37 +0000 (0:00:08.951) 0:07:47.205 ********** 2026-04-05 00:37:07.521520 | orchestrator | ok: [testbed-manager] 2026-04-05 00:37:07.521537 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:37:07.521549 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:37:07.521560 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:37:07.521571 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:37:07.521581 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:37:07.521592 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:37:07.521603 | orchestrator | 2026-04-05 00:37:07.521615 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-04-05 00:37:07.521626 | orchestrator | Sunday 05 April 2026 00:36:38 +0000 (0:00:01.906) 0:07:49.111 ********** 2026-04-05 00:37:07.521637 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:37:07.521648 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:37:07.521659 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:37:07.521670 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:37:07.521681 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:37:07.521692 | orchestrator | ok: [testbed-node-5] 
2026-04-05 00:37:07.521702 | orchestrator | 2026-04-05 00:37:07.521713 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-04-05 00:37:07.521725 | orchestrator | Sunday 05 April 2026 00:36:40 +0000 (0:00:01.536) 0:07:50.647 ********** 2026-04-05 00:37:07.521735 | orchestrator | changed: [testbed-manager] 2026-04-05 00:37:07.521747 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:37:07.521757 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:37:07.521768 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:37:07.521779 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:37:07.521789 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:37:07.521800 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:37:07.521811 | orchestrator | 2026-04-05 00:37:07.521822 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-04-05 00:37:07.521832 | orchestrator | 2026-04-05 00:37:07.521843 | orchestrator | TASK [Include hardening role] ************************************************** 2026-04-05 00:37:07.521882 | orchestrator | Sunday 05 April 2026 00:36:41 +0000 (0:00:01.262) 0:07:51.910 ********** 2026-04-05 00:37:07.521893 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:37:07.521904 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:37:07.521915 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:37:07.521925 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:37:07.521936 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:37:07.521946 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:37:07.521957 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:37:07.521967 | orchestrator | 2026-04-05 00:37:07.521978 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-04-05 00:37:07.521989 | orchestrator | 2026-04-05 00:37:07.522000 | orchestrator | TASK 
[osism.services.journald : Copy configuration file] *********************** 2026-04-05 00:37:07.522010 | orchestrator | Sunday 05 April 2026 00:36:42 +0000 (0:00:00.525) 0:07:52.435 ********** 2026-04-05 00:37:07.522085 | orchestrator | changed: [testbed-manager] 2026-04-05 00:37:07.522097 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:37:07.522108 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:37:07.522119 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:37:07.522159 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:37:07.522179 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:37:07.522198 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:37:07.522217 | orchestrator | 2026-04-05 00:37:07.522236 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-04-05 00:37:07.522248 | orchestrator | Sunday 05 April 2026 00:36:43 +0000 (0:00:01.400) 0:07:53.836 ********** 2026-04-05 00:37:07.522259 | orchestrator | ok: [testbed-manager] 2026-04-05 00:37:07.522269 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:37:07.522280 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:37:07.522291 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:37:07.522301 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:37:07.522312 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:37:07.522322 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:37:07.522333 | orchestrator | 2026-04-05 00:37:07.522344 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-04-05 00:37:07.522355 | orchestrator | Sunday 05 April 2026 00:36:45 +0000 (0:00:01.666) 0:07:55.502 ********** 2026-04-05 00:37:07.522380 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:37:07.522391 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:37:07.522402 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:37:07.522412 | orchestrator | skipping: [testbed-node-2] 
2026-04-05 00:37:07.522423 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:37:07.522434 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:37:07.522444 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:37:07.522455 | orchestrator | 2026-04-05 00:37:07.522465 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-04-05 00:37:07.522476 | orchestrator | Sunday 05 April 2026 00:36:45 +0000 (0:00:00.526) 0:07:56.029 ********** 2026-04-05 00:37:07.522488 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:37:07.522500 | orchestrator | 2026-04-05 00:37:07.522511 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-04-05 00:37:07.522522 | orchestrator | Sunday 05 April 2026 00:36:46 +0000 (0:00:00.967) 0:07:56.996 ********** 2026-04-05 00:37:07.522534 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:37:07.522547 | orchestrator | 2026-04-05 00:37:07.522558 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-04-05 00:37:07.522568 | orchestrator | Sunday 05 April 2026 00:36:47 +0000 (0:00:01.107) 0:07:58.104 ********** 2026-04-05 00:37:07.522590 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:37:07.522601 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:37:07.522611 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:37:07.522622 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:37:07.522633 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:37:07.522643 | orchestrator | changed: [testbed-manager] 2026-04-05 00:37:07.522654 | 
orchestrator | changed: [testbed-node-5] 2026-04-05 00:37:07.522664 | orchestrator | 2026-04-05 00:37:07.522692 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-04-05 00:37:07.522704 | orchestrator | Sunday 05 April 2026 00:36:56 +0000 (0:00:08.239) 0:08:06.344 ********** 2026-04-05 00:37:07.522715 | orchestrator | changed: [testbed-manager] 2026-04-05 00:37:07.522725 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:37:07.522736 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:37:07.522746 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:37:07.522757 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:37:07.522767 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:37:07.522778 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:37:07.522789 | orchestrator | 2026-04-05 00:37:07.522800 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-04-05 00:37:07.522811 | orchestrator | Sunday 05 April 2026 00:36:57 +0000 (0:00:00.856) 0:08:07.201 ********** 2026-04-05 00:37:07.522821 | orchestrator | changed: [testbed-manager] 2026-04-05 00:37:07.522832 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:37:07.522842 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:37:07.522853 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:37:07.522863 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:37:07.522874 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:37:07.522884 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:37:07.522895 | orchestrator | 2026-04-05 00:37:07.522906 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-04-05 00:37:07.522917 | orchestrator | Sunday 05 April 2026 00:36:58 +0000 (0:00:01.353) 0:08:08.554 ********** 2026-04-05 00:37:07.522927 | orchestrator | changed: [testbed-manager] 2026-04-05 00:37:07.522938 | orchestrator | 
changed: [testbed-node-0] 2026-04-05 00:37:07.522948 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:37:07.522959 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:37:07.522969 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:37:07.522980 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:37:07.522990 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:37:07.523001 | orchestrator | 2026-04-05 00:37:07.523012 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2026-04-05 00:37:07.523023 | orchestrator | Sunday 05 April 2026 00:37:00 +0000 (0:00:01.966) 0:08:10.521 ********** 2026-04-05 00:37:07.523033 | orchestrator | changed: [testbed-manager] 2026-04-05 00:37:07.523043 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:37:07.523054 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:37:07.523065 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:37:07.523075 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:37:07.523086 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:37:07.523096 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:37:07.523107 | orchestrator | 2026-04-05 00:37:07.523118 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-04-05 00:37:07.523159 | orchestrator | Sunday 05 April 2026 00:37:01 +0000 (0:00:01.176) 0:08:11.697 ********** 2026-04-05 00:37:07.523170 | orchestrator | changed: [testbed-manager] 2026-04-05 00:37:07.523181 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:37:07.523192 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:37:07.523202 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:37:07.523213 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:37:07.523224 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:37:07.523234 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:37:07.523245 | orchestrator | 2026-04-05 
00:37:07.523263 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-04-05 00:37:07.523274 | orchestrator | 2026-04-05 00:37:07.523284 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-04-05 00:37:07.523295 | orchestrator | Sunday 05 April 2026 00:37:02 +0000 (0:00:01.153) 0:08:12.850 ********** 2026-04-05 00:37:07.523306 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:37:07.523317 | orchestrator | 2026-04-05 00:37:07.523328 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-04-05 00:37:07.523344 | orchestrator | Sunday 05 April 2026 00:37:03 +0000 (0:00:01.014) 0:08:13.865 ********** 2026-04-05 00:37:07.523355 | orchestrator | ok: [testbed-manager] 2026-04-05 00:37:07.523365 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:37:07.523376 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:37:07.523387 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:37:07.523397 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:37:07.523408 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:37:07.523418 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:37:07.523429 | orchestrator | 2026-04-05 00:37:07.523439 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-04-05 00:37:07.523450 | orchestrator | Sunday 05 April 2026 00:37:04 +0000 (0:00:00.831) 0:08:14.697 ********** 2026-04-05 00:37:07.523461 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:37:07.523471 | orchestrator | changed: [testbed-manager] 2026-04-05 00:37:07.523482 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:37:07.523493 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:37:07.523503 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:37:07.523514 | 
orchestrator | changed: [testbed-node-4] 2026-04-05 00:37:07.523524 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:37:07.523535 | orchestrator | 2026-04-05 00:37:07.523545 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-04-05 00:37:07.523556 | orchestrator | Sunday 05 April 2026 00:37:05 +0000 (0:00:01.298) 0:08:15.996 ********** 2026-04-05 00:37:07.523567 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:37:07.523578 | orchestrator | 2026-04-05 00:37:07.523589 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-04-05 00:37:07.523599 | orchestrator | Sunday 05 April 2026 00:37:06 +0000 (0:00:00.874) 0:08:16.871 ********** 2026-04-05 00:37:07.523610 | orchestrator | ok: [testbed-manager] 2026-04-05 00:37:07.523621 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:37:07.523631 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:37:07.523642 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:37:07.523652 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:37:07.523663 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:37:07.523673 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:37:07.523684 | orchestrator | 2026-04-05 00:37:07.523702 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-04-05 00:37:09.119749 | orchestrator | Sunday 05 April 2026 00:37:07 +0000 (0:00:00.819) 0:08:17.690 ********** 2026-04-05 00:37:09.119875 | orchestrator | changed: [testbed-manager] 2026-04-05 00:37:09.119902 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:37:09.119914 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:37:09.119925 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:37:09.119936 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:37:09.119946 | 
orchestrator | changed: [testbed-node-4] 2026-04-05 00:37:09.119957 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:37:09.119968 | orchestrator | 2026-04-05 00:37:09.119979 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:37:09.119992 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-04-05 00:37:09.120032 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-04-05 00:37:09.120044 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-04-05 00:37:09.120055 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-04-05 00:37:09.120066 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-04-05 00:37:09.120076 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-04-05 00:37:09.120087 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-04-05 00:37:09.120098 | orchestrator | 2026-04-05 00:37:09.120109 | orchestrator | 2026-04-05 00:37:09.120119 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:37:09.120180 | orchestrator | Sunday 05 April 2026 00:37:08 +0000 (0:00:01.259) 0:08:18.949 ********** 2026-04-05 00:37:09.120194 | orchestrator | =============================================================================== 2026-04-05 00:37:09.120204 | orchestrator | osism.commons.packages : Install required packages --------------------- 81.06s 2026-04-05 00:37:09.120215 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 36.77s 2026-04-05 00:37:09.120225 | orchestrator | 
osism.commons.packages : Download required packages -------------------- 32.08s 2026-04-05 00:37:09.120236 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.90s 2026-04-05 00:37:09.120247 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.50s 2026-04-05 00:37:09.120257 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.18s 2026-04-05 00:37:09.120269 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.78s 2026-04-05 00:37:09.120279 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.89s 2026-04-05 00:37:09.120292 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.30s 2026-04-05 00:37:09.120305 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.95s 2026-04-05 00:37:09.120333 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.93s 2026-04-05 00:37:09.120346 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.87s 2026-04-05 00:37:09.120359 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.78s 2026-04-05 00:37:09.120371 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.24s 2026-04-05 00:37:09.120384 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.79s 2026-04-05 00:37:09.120396 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.18s 2026-04-05 00:37:09.120409 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 6.52s 2026-04-05 00:37:09.120421 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.47s 2026-04-05 00:37:09.120433 | orchestrator | 
osism.commons.cleanup : Remove dependencies that are no longer required --- 6.18s 2026-04-05 00:37:09.120446 | orchestrator | osism.commons.services : Populate service facts ------------------------- 6.00s 2026-04-05 00:37:09.312584 | orchestrator | + osism apply fail2ban 2026-04-05 00:37:21.062274 | orchestrator | 2026-04-05 00:37:21 | INFO  | Prepare task for execution of fail2ban. 2026-04-05 00:37:21.152937 | orchestrator | 2026-04-05 00:37:21 | INFO  | Task ffe0709c-fe95-45bd-8714-f2b870cf2a35 (fail2ban) was prepared for execution. 2026-04-05 00:37:21.153055 | orchestrator | 2026-04-05 00:37:21 | INFO  | It takes a moment until task ffe0709c-fe95-45bd-8714-f2b870cf2a35 (fail2ban) has been started and output is visible here. 2026-04-05 00:37:42.414981 | orchestrator | 2026-04-05 00:37:42.415092 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-04-05 00:37:42.415109 | orchestrator | 2026-04-05 00:37:42.415121 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-04-05 00:37:42.415132 | orchestrator | Sunday 05 April 2026 00:37:24 +0000 (0:00:00.345) 0:00:00.345 ********** 2026-04-05 00:37:42.415144 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:37:42.415157 | orchestrator | 2026-04-05 00:37:42.415226 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-04-05 00:37:42.415239 | orchestrator | Sunday 05 April 2026 00:37:26 +0000 (0:00:01.293) 0:00:01.638 ********** 2026-04-05 00:37:42.415251 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:37:42.415263 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:37:42.415274 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:37:42.415285 
| orchestrator | changed: [testbed-node-2] 2026-04-05 00:37:42.415295 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:37:42.415306 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:37:42.415317 | orchestrator | changed: [testbed-manager] 2026-04-05 00:37:42.415328 | orchestrator | 2026-04-05 00:37:42.415339 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-04-05 00:37:42.415350 | orchestrator | Sunday 05 April 2026 00:37:37 +0000 (0:00:11.180) 0:00:12.819 ********** 2026-04-05 00:37:42.415360 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:37:42.415371 | orchestrator | changed: [testbed-manager] 2026-04-05 00:37:42.415382 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:37:42.415392 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:37:42.415403 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:37:42.415413 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:37:42.415424 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:37:42.415435 | orchestrator | 2026-04-05 00:37:42.415445 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] *********************** 2026-04-05 00:37:42.415456 | orchestrator | Sunday 05 April 2026 00:37:39 +0000 (0:00:01.685) 0:00:14.504 ********** 2026-04-05 00:37:42.415467 | orchestrator | ok: [testbed-manager] 2026-04-05 00:37:42.415479 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:37:42.415490 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:37:42.415500 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:37:42.415511 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:37:42.415521 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:37:42.415535 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:37:42.415547 | orchestrator | 2026-04-05 00:37:42.415560 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] ***************** 2026-04-05 00:37:42.415574 | orchestrator | Sunday 05 
April 2026 00:37:40 +0000 (0:00:01.254) 0:00:15.758 ********** 2026-04-05 00:37:42.415587 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:37:42.415599 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:37:42.415611 | orchestrator | changed: [testbed-manager] 2026-04-05 00:37:42.415624 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:37:42.415636 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:37:42.415648 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:37:42.415660 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:37:42.415673 | orchestrator | 2026-04-05 00:37:42.415685 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:37:42.415699 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:37:42.415710 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:37:42.415747 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:37:42.415759 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:37:42.415770 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:37:42.415781 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:37:42.415791 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:37:42.415802 | orchestrator | 2026-04-05 00:37:42.415813 | orchestrator | 2026-04-05 00:37:42.415823 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:37:42.415834 | orchestrator | Sunday 05 April 2026 00:37:42 +0000 (0:00:01.705) 0:00:17.464 ********** 2026-04-05 00:37:42.415844 | 
orchestrator | =============================================================================== 2026-04-05 00:37:42.415855 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.18s 2026-04-05 00:37:42.415865 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.71s 2026-04-05 00:37:42.415876 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.69s 2026-04-05 00:37:42.415887 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.29s 2026-04-05 00:37:42.415897 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.25s 2026-04-05 00:37:42.621096 | orchestrator | + osism apply network 2026-04-05 00:37:53.935467 | orchestrator | 2026-04-05 00:37:53 | INFO  | Prepare task for execution of network. 2026-04-05 00:37:54.029503 | orchestrator | 2026-04-05 00:37:54 | INFO  | Task 98911a66-2b74-4d9a-81d1-fb2604182fda (network) was prepared for execution. 2026-04-05 00:37:54.029618 | orchestrator | 2026-04-05 00:37:54 | INFO  | It takes a moment until task 98911a66-2b74-4d9a-81d1-fb2604182fda (network) has been started and output is visible here. 
2026-04-05 00:38:24.565206 | orchestrator | 2026-04-05 00:38:24.565351 | orchestrator | PLAY [Apply role network] ****************************************************** 2026-04-05 00:38:24.565368 | orchestrator | 2026-04-05 00:38:24.565381 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2026-04-05 00:38:24.565392 | orchestrator | Sunday 05 April 2026 00:37:57 +0000 (0:00:00.354) 0:00:00.354 ********** 2026-04-05 00:38:24.565403 | orchestrator | ok: [testbed-manager] 2026-04-05 00:38:24.565415 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:38:24.565426 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:38:24.565436 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:38:24.565447 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:38:24.565458 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:38:24.565469 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:38:24.565479 | orchestrator | 2026-04-05 00:38:24.565490 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2026-04-05 00:38:24.565501 | orchestrator | Sunday 05 April 2026 00:37:58 +0000 (0:00:00.703) 0:00:01.057 ********** 2026-04-05 00:38:24.565514 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:38:24.565527 | orchestrator | 2026-04-05 00:38:24.565538 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-04-05 00:38:24.565549 | orchestrator | Sunday 05 April 2026 00:37:59 +0000 (0:00:01.273) 0:00:02.331 ********** 2026-04-05 00:38:24.565560 | orchestrator | ok: [testbed-manager] 2026-04-05 00:38:24.565596 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:38:24.565608 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:38:24.565619 | 
orchestrator | ok: [testbed-node-2] 2026-04-05 00:38:24.565629 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:38:24.565639 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:38:24.565650 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:38:24.565660 | orchestrator | 2026-04-05 00:38:24.565671 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2026-04-05 00:38:24.565682 | orchestrator | Sunday 05 April 2026 00:38:02 +0000 (0:00:02.620) 0:00:04.951 ********** 2026-04-05 00:38:24.565692 | orchestrator | ok: [testbed-manager] 2026-04-05 00:38:24.565703 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:38:24.565713 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:38:24.565724 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:38:24.565734 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:38:24.565748 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:38:24.565767 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:38:24.565784 | orchestrator | 2026-04-05 00:38:24.565802 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2026-04-05 00:38:24.565841 | orchestrator | Sunday 05 April 2026 00:38:03 +0000 (0:00:01.554) 0:00:06.506 ********** 2026-04-05 00:38:24.565879 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2026-04-05 00:38:24.565901 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2026-04-05 00:38:24.565920 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-04-05 00:38:24.565939 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-04-05 00:38:24.565959 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-04-05 00:38:24.565978 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-04-05 00:38:24.565998 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-04-05 00:38:24.566014 | orchestrator | 2026-04-05 00:38:24.566104 | orchestrator | TASK [osism.commons.network : Write 
network_netplan_config_template to temporary file] *** 2026-04-05 00:38:24.566117 | orchestrator | Sunday 05 April 2026 00:38:04 +0000 (0:00:01.329) 0:00:07.835 ********** 2026-04-05 00:38:24.566128 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:38:24.566139 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:38:24.566150 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:38:24.566160 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:38:24.566171 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:38:24.566182 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:38:24.566197 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:38:24.566229 | orchestrator | 2026-04-05 00:38:24.566243 | orchestrator | TASK [osism.commons.network : Render netplan configuration from network_netplan_config_template variable] *** 2026-04-05 00:38:24.566255 | orchestrator | Sunday 05 April 2026 00:38:05 +0000 (0:00:00.805) 0:00:08.641 ********** 2026-04-05 00:38:24.566266 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:38:24.566276 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:38:24.566286 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:38:24.566297 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:38:24.566307 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:38:24.566317 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:38:24.566328 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:38:24.566338 | orchestrator | 2026-04-05 00:38:24.566349 | orchestrator | TASK [osism.commons.network : Remove temporary network_netplan_config_template file] *** 2026-04-05 00:38:24.566360 | orchestrator | Sunday 05 April 2026 00:38:06 +0000 (0:00:00.877) 0:00:09.519 ********** 2026-04-05 00:38:24.566370 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:38:24.566381 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:38:24.566391 | orchestrator | skipping: [testbed-node-1] 
2026-04-05 00:38:24.566401 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:38:24.566412 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:38:24.566422 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:38:24.566433 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:38:24.566455 | orchestrator | 2026-04-05 00:38:24.566466 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2026-04-05 00:38:24.566476 | orchestrator | Sunday 05 April 2026 00:38:07 +0000 (0:00:00.876) 0:00:10.396 ********** 2026-04-05 00:38:24.566487 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-05 00:38:24.566497 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 00:38:24.566508 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-05 00:38:24.566518 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-05 00:38:24.566528 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-05 00:38:24.566539 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-05 00:38:24.566550 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-05 00:38:24.566560 | orchestrator | 2026-04-05 00:38:24.566592 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-04-05 00:38:24.566604 | orchestrator | Sunday 05 April 2026 00:38:11 +0000 (0:00:03.650) 0:00:14.046 ********** 2026-04-05 00:38:24.566614 | orchestrator | changed: [testbed-manager] 2026-04-05 00:38:24.566625 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:38:24.566635 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:38:24.566646 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:38:24.566656 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:38:24.566666 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:38:24.566677 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:38:24.566687 | orchestrator | 2026-04-05 00:38:24.566698 | orchestrator | TASK 
[osism.commons.network : Remove netplan configuration template] *********** 2026-04-05 00:38:24.566709 | orchestrator | Sunday 05 April 2026 00:38:12 +0000 (0:00:01.770) 0:00:15.817 ********** 2026-04-05 00:38:24.566719 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 00:38:24.566729 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-05 00:38:24.566740 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-05 00:38:24.566750 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-05 00:38:24.566761 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-05 00:38:24.566771 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-05 00:38:24.566782 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-05 00:38:24.566792 | orchestrator | 2026-04-05 00:38:24.566803 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-04-05 00:38:24.566813 | orchestrator | Sunday 05 April 2026 00:38:14 +0000 (0:00:01.910) 0:00:17.728 ********** 2026-04-05 00:38:24.566823 | orchestrator | ok: [testbed-manager] 2026-04-05 00:38:24.566834 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:38:24.566845 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:38:24.566855 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:38:24.566865 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:38:24.566876 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:38:24.566886 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:38:24.566897 | orchestrator | 2026-04-05 00:38:24.566907 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-04-05 00:38:24.566918 | orchestrator | Sunday 05 April 2026 00:38:16 +0000 (0:00:01.138) 0:00:18.866 ********** 2026-04-05 00:38:24.566928 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:38:24.566939 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:38:24.566949 | orchestrator | skipping: [testbed-node-1] 2026-04-05 
00:38:24.566960 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:38:24.566970 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:38:24.566985 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:38:24.567003 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:38:24.567021 | orchestrator | 2026-04-05 00:38:24.567039 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-04-05 00:38:24.567057 | orchestrator | Sunday 05 April 2026 00:38:16 +0000 (0:00:00.650) 0:00:19.516 ********** 2026-04-05 00:38:24.567076 | orchestrator | ok: [testbed-manager] 2026-04-05 00:38:24.567094 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:38:24.567114 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:38:24.567136 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:38:24.567147 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:38:24.567157 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:38:24.567167 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:38:24.567178 | orchestrator | 2026-04-05 00:38:24.567188 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-04-05 00:38:24.567199 | orchestrator | Sunday 05 April 2026 00:38:18 +0000 (0:00:02.176) 0:00:21.692 ********** 2026-04-05 00:38:24.567260 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:38:24.567274 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:38:24.567284 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:38:24.567295 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:38:24.567305 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:38:24.567316 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:38:24.567326 | orchestrator | changed: [testbed-manager] => (item={'src': '/opt/configuration/network/iptables.sh', 'dest': 'routable.d/iptables.sh'}) 2026-04-05 00:38:24.567338 | orchestrator | 2026-04-05 00:38:24.567355 | orchestrator | TASK 
[osism.commons.network : Manage service networkd-dispatcher] ************** 2026-04-05 00:38:24.567366 | orchestrator | Sunday 05 April 2026 00:38:19 +0000 (0:00:01.138) 0:00:22.831 ********** 2026-04-05 00:38:24.567377 | orchestrator | ok: [testbed-manager] 2026-04-05 00:38:24.567387 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:38:24.567398 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:38:24.567408 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:38:24.567419 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:38:24.567429 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:38:24.567439 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:38:24.567450 | orchestrator | 2026-04-05 00:38:24.567460 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-04-05 00:38:24.567471 | orchestrator | Sunday 05 April 2026 00:38:21 +0000 (0:00:01.608) 0:00:24.439 ********** 2026-04-05 00:38:24.567482 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:38:24.567495 | orchestrator | 2026-04-05 00:38:24.567506 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-04-05 00:38:24.567516 | orchestrator | Sunday 05 April 2026 00:38:22 +0000 (0:00:01.249) 0:00:25.689 ********** 2026-04-05 00:38:24.567527 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:38:24.567537 | orchestrator | ok: [testbed-manager] 2026-04-05 00:38:24.567547 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:38:24.567558 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:38:24.567568 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:38:24.567579 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:38:24.567590 | orchestrator | ok: [testbed-node-5] 2026-04-05 
00:38:24.567600 | orchestrator | 2026-04-05 00:38:24.567611 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-04-05 00:38:24.567621 | orchestrator | Sunday 05 April 2026 00:38:24 +0000 (0:00:01.168) 0:00:26.858 ********** 2026-04-05 00:38:24.567632 | orchestrator | ok: [testbed-manager] 2026-04-05 00:38:24.567643 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:38:24.567653 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:38:24.567663 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:38:24.567674 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:38:24.567694 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:38:41.679561 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:38:41.679671 | orchestrator | 2026-04-05 00:38:41.679688 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-04-05 00:38:41.679702 | orchestrator | Sunday 05 April 2026 00:38:24 +0000 (0:00:00.668) 0:00:27.526 ********** 2026-04-05 00:38:41.679713 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-04-05 00:38:41.679725 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-04-05 00:38:41.679761 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-04-05 00:38:41.679773 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-04-05 00:38:41.679784 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-05 00:38:41.679795 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-04-05 00:38:41.679806 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-05 00:38:41.679816 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-05 00:38:41.679827 | orchestrator | changed: [testbed-node-2] => 
(item=/etc/netplan/50-cloud-init.yaml) 2026-04-05 00:38:41.679838 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-05 00:38:41.679848 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-04-05 00:38:41.679859 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-04-05 00:38:41.679869 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-05 00:38:41.679880 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-05 00:38:41.679891 | orchestrator | 2026-04-05 00:38:41.679902 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-04-05 00:38:41.679912 | orchestrator | Sunday 05 April 2026 00:38:25 +0000 (0:00:01.291) 0:00:28.817 ********** 2026-04-05 00:38:41.679926 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:38:41.679947 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:38:41.679965 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:38:41.679984 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:38:41.680002 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:38:41.680021 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:38:41.680039 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:38:41.680057 | orchestrator | 2026-04-05 00:38:41.680075 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-04-05 00:38:41.680096 | orchestrator | Sunday 05 April 2026 00:38:26 +0000 (0:00:00.694) 0:00:29.512 ********** 2026-04-05 00:38:41.680119 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, testbed-manager, testbed-node-0, testbed-node-2, testbed-node-3, testbed-node-5, testbed-node-4 2026-04-05 00:38:41.680143 | orchestrator | 2026-04-05 
00:38:41.680161 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-04-05 00:38:41.680175 | orchestrator | Sunday 05 April 2026 00:38:30 +0000 (0:00:04.053) 0:00:33.566 ********** 2026-04-05 00:38:41.680190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-05 00:38:41.680219 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}}) 2026-04-05 00:38:41.680271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}}) 2026-04-05 00:38:41.680286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-05 00:38:41.680299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-05 00:38:41.680324 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 
'addresses': []}}) 2026-04-05 00:38:41.680358 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-05 00:38:41.680372 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}}) 2026-04-05 00:38:41.680385 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}}) 2026-04-05 00:38:41.680398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}}) 2026-04-05 00:38:41.680410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}}) 2026-04-05 00:38:41.680423 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}}) 2026-04-05 00:38:41.680435 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': 
'192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}}) 2026-04-05 00:38:41.680448 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}}) 2026-04-05 00:38:41.680459 | orchestrator | 2026-04-05 00:38:41.680470 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-04-05 00:38:41.680481 | orchestrator | Sunday 05 April 2026 00:38:36 +0000 (0:00:06.100) 0:00:39.667 ********** 2026-04-05 00:38:41.680492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-05 00:38:41.680508 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}}) 2026-04-05 00:38:41.680520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-05 00:38:41.680531 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-05 00:38:41.680549 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}}) 2026-04-05 00:38:41.680560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}}) 2026-04-05 00:38:41.680572 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}}) 2026-04-05 00:38:41.680590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-05 00:38:54.593162 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-05 00:38:54.593342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}}) 2026-04-05 00:38:54.593360 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', 
'192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}}) 2026-04-05 00:38:54.593370 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}}) 2026-04-05 00:38:54.593379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}}) 2026-04-05 00:38:54.593388 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}}) 2026-04-05 00:38:54.593398 | orchestrator | 2026-04-05 00:38:54.593408 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-04-05 00:38:54.593418 | orchestrator | Sunday 05 April 2026 00:38:42 +0000 (0:00:05.836) 0:00:45.504 ********** 2026-04-05 00:38:54.593428 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:38:54.593437 | orchestrator | 2026-04-05 00:38:54.593446 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-04-05 00:38:54.593455 | orchestrator | Sunday 05 April 2026 00:38:44 +0000 (0:00:01.503) 0:00:47.007 ********** 2026-04-05 00:38:54.593463 | orchestrator | ok: [testbed-manager] 2026-04-05 00:38:54.593474 | orchestrator | ok: [testbed-node-0] 2026-04-05 
00:38:54.593482 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:38:54.593514 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:38:54.593524 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:38:54.593532 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:38:54.593541 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:38:54.593549 | orchestrator | 2026-04-05 00:38:54.593571 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-04-05 00:38:54.593580 | orchestrator | Sunday 05 April 2026 00:38:45 +0000 (0:00:00.977) 0:00:47.985 ********** 2026-04-05 00:38:54.593589 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-05 00:38:54.593598 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-05 00:38:54.593607 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-05 00:38:54.593616 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-05 00:38:54.593624 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:38:54.593634 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-05 00:38:54.593642 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-05 00:38:54.593651 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-05 00:38:54.593660 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-05 00:38:54.593668 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-05 00:38:54.593677 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-05 00:38:54.593685 | orchestrator | skipping: [testbed-node-1] => 
(item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-05 00:38:54.593693 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-05 00:38:54.593702 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:38:54.593710 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-05 00:38:54.593719 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-05 00:38:54.593727 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-05 00:38:54.593736 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-05 00:38:54.593759 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:38:54.593769 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-05 00:38:54.593778 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-05 00:38:54.593786 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-05 00:38:54.593795 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-05 00:38:54.593803 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:38:54.593812 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-05 00:38:54.593821 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-05 00:38:54.593829 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-05 00:38:54.593838 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:38:54.593846 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-05 00:38:54.593855 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:38:54.593864 | 
orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-05 00:38:54.593873 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-05 00:38:54.593881 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-05 00:38:54.593900 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-05 00:38:54.593909 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:38:54.593918 | orchestrator | 2026-04-05 00:38:54.593926 | orchestrator | TASK [osism.commons.network : Include network extra init] ********************** 2026-04-05 00:38:54.593935 | orchestrator | Sunday 05 April 2026 00:38:46 +0000 (0:00:01.000) 0:00:48.985 ********** 2026-04-05 00:38:54.593944 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:38:54.593953 | orchestrator | 2026-04-05 00:38:54.593961 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] **************** 2026-04-05 00:38:54.593970 | orchestrator | Sunday 05 April 2026 00:38:47 +0000 (0:00:01.339) 0:00:50.325 ********** 2026-04-05 00:38:54.593978 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:38:54.593987 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:38:54.593996 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:38:54.594005 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:38:54.594013 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:38:54.594073 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:38:54.594082 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:38:54.594090 | orchestrator | 2026-04-05 00:38:54.594099 | orchestrator | TASK [osism.commons.network : Deploy 
network-extra-init systemd service] ******* 2026-04-05 00:38:54.594108 | orchestrator | Sunday 05 April 2026 00:38:48 +0000 (0:00:00.621) 0:00:50.947 ********** 2026-04-05 00:38:54.594117 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:38:54.594125 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:38:54.594133 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:38:54.594149 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:38:54.594158 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:38:54.594167 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:38:54.594180 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:38:54.594189 | orchestrator | 2026-04-05 00:38:54.594198 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] ***** 2026-04-05 00:38:54.594206 | orchestrator | Sunday 05 April 2026 00:38:48 +0000 (0:00:00.838) 0:00:51.785 ********** 2026-04-05 00:38:54.594215 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:38:54.594224 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:38:54.594232 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:38:54.594261 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:38:54.594270 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:38:54.594279 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:38:54.594287 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:38:54.594296 | orchestrator | 2026-04-05 00:38:54.594305 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] ***** 2026-04-05 00:38:54.594314 | orchestrator | Sunday 05 April 2026 00:38:49 +0000 (0:00:00.652) 0:00:52.438 ********** 2026-04-05 00:38:54.594322 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:38:54.594331 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:38:54.594339 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:38:54.594348 | orchestrator | ok: [testbed-node-3] 2026-04-05 
00:38:54.594357 | orchestrator | ok: [testbed-manager] 2026-04-05 00:38:54.594365 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:38:54.594374 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:38:54.594382 | orchestrator | 2026-04-05 00:38:54.594391 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] ******* 2026-04-05 00:38:54.594400 | orchestrator | Sunday 05 April 2026 00:38:51 +0000 (0:00:01.744) 0:00:54.183 ********** 2026-04-05 00:38:54.594408 | orchestrator | ok: [testbed-manager] 2026-04-05 00:38:54.594417 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:38:54.594425 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:38:54.594434 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:38:54.594442 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:38:54.594458 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:38:54.594466 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:38:54.594475 | orchestrator | 2026-04-05 00:38:54.594483 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] **************** 2026-04-05 00:38:54.594492 | orchestrator | Sunday 05 April 2026 00:38:52 +0000 (0:00:01.131) 0:00:55.314 ********** 2026-04-05 00:38:54.594501 | orchestrator | ok: [testbed-manager] 2026-04-05 00:38:54.594509 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:38:54.594518 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:38:54.594526 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:38:54.594534 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:38:54.594543 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:38:54.594551 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:38:54.594560 | orchestrator | 2026-04-05 00:38:54.594575 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2026-04-05 00:38:56.408547 | orchestrator | Sunday 05 April 2026 00:38:54 +0000 (0:00:02.107) 0:00:57.422 ********** 2026-04-05 00:38:56.408677 | orchestrator | 
skipping: [testbed-manager] 2026-04-05 00:38:56.408705 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:38:56.408725 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:38:56.408744 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:38:56.408761 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:38:56.408779 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:38:56.408798 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:38:56.408817 | orchestrator | 2026-04-05 00:38:56.408836 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2026-04-05 00:38:56.408855 | orchestrator | Sunday 05 April 2026 00:38:55 +0000 (0:00:00.839) 0:00:58.261 ********** 2026-04-05 00:38:56.408874 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:38:56.408893 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:38:56.408913 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:38:56.408933 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:38:56.408953 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:38:56.408973 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:38:56.408994 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:38:56.409014 | orchestrator | 2026-04-05 00:38:56.409034 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:38:56.409056 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-05 00:38:56.409079 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-05 00:38:56.409100 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-05 00:38:56.409121 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-05 00:38:56.409142 | orchestrator | 
testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-05 00:38:56.409170 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-05 00:38:56.409192 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-05 00:38:56.409211 | orchestrator | 2026-04-05 00:38:56.409232 | orchestrator | 2026-04-05 00:38:56.409283 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:38:56.409303 | orchestrator | Sunday 05 April 2026 00:38:55 +0000 (0:00:00.538) 0:00:58.800 ********** 2026-04-05 00:38:56.409324 | orchestrator | =============================================================================== 2026-04-05 00:38:56.409382 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.10s 2026-04-05 00:38:56.409405 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.84s 2026-04-05 00:38:56.409425 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.05s 2026-04-05 00:38:56.409446 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.65s 2026-04-05 00:38:56.409465 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.62s 2026-04-05 00:38:56.409482 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.18s 2026-04-05 00:38:56.409500 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.11s 2026-04-05 00:38:56.409517 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.91s 2026-04-05 00:38:56.409536 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.77s 2026-04-05 00:38:56.409555 | orchestrator | osism.commons.network : 
Disable and stop network-extra-init service ----- 1.74s 2026-04-05 00:38:56.409575 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.61s 2026-04-05 00:38:56.409596 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.55s 2026-04-05 00:38:56.409616 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.50s 2026-04-05 00:38:56.409636 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.34s 2026-04-05 00:38:56.409657 | orchestrator | osism.commons.network : Create required directories --------------------- 1.33s 2026-04-05 00:38:56.409678 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.29s 2026-04-05 00:38:56.409698 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.27s 2026-04-05 00:38:56.409719 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.25s 2026-04-05 00:38:56.409739 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.17s 2026-04-05 00:38:56.409760 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 1.14s 2026-04-05 00:38:56.607196 | orchestrator | + osism apply wireguard 2026-04-05 00:39:07.929958 | orchestrator | 2026-04-05 00:39:07 | INFO  | Prepare task for execution of wireguard. 2026-04-05 00:39:08.009709 | orchestrator | 2026-04-05 00:39:08 | INFO  | Task 2dccaa29-5c4d-4317-8df4-8af02b60bf88 (wireguard) was prepared for execution. 2026-04-05 00:39:08.009807 | orchestrator | 2026-04-05 00:39:08 | INFO  | It takes a moment until task 2dccaa29-5c4d-4317-8df4-8af02b60bf88 (wireguard) has been started and output is visible here. 
2026-04-05 00:39:28.279595 | orchestrator | 2026-04-05 00:39:28.279707 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-04-05 00:39:28.279723 | orchestrator | 2026-04-05 00:39:28.279735 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-04-05 00:39:28.279745 | orchestrator | Sunday 05 April 2026 00:39:11 +0000 (0:00:00.315) 0:00:00.315 ********** 2026-04-05 00:39:28.279756 | orchestrator | ok: [testbed-manager] 2026-04-05 00:39:28.279767 | orchestrator | 2026-04-05 00:39:28.279777 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-04-05 00:39:28.279787 | orchestrator | Sunday 05 April 2026 00:39:13 +0000 (0:00:01.946) 0:00:02.262 ********** 2026-04-05 00:39:28.279797 | orchestrator | changed: [testbed-manager] 2026-04-05 00:39:28.279808 | orchestrator | 2026-04-05 00:39:28.279818 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-04-05 00:39:28.279827 | orchestrator | Sunday 05 April 2026 00:39:20 +0000 (0:00:06.893) 0:00:09.156 ********** 2026-04-05 00:39:28.279837 | orchestrator | changed: [testbed-manager] 2026-04-05 00:39:28.279847 | orchestrator | 2026-04-05 00:39:28.279857 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-04-05 00:39:28.279866 | orchestrator | Sunday 05 April 2026 00:39:20 +0000 (0:00:00.546) 0:00:09.703 ********** 2026-04-05 00:39:28.279876 | orchestrator | changed: [testbed-manager] 2026-04-05 00:39:28.279909 | orchestrator | 2026-04-05 00:39:28.279919 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-04-05 00:39:28.279929 | orchestrator | Sunday 05 April 2026 00:39:21 +0000 (0:00:00.464) 0:00:10.167 ********** 2026-04-05 00:39:28.279939 | orchestrator | ok: [testbed-manager] 2026-04-05 00:39:28.279948 | orchestrator | 2026-04-05 
00:39:28.279958 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-04-05 00:39:28.279968 | orchestrator | Sunday 05 April 2026 00:39:21 +0000 (0:00:00.564) 0:00:10.732 ********** 2026-04-05 00:39:28.279977 | orchestrator | ok: [testbed-manager] 2026-04-05 00:39:28.279987 | orchestrator | 2026-04-05 00:39:28.279996 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-04-05 00:39:28.280005 | orchestrator | Sunday 05 April 2026 00:39:22 +0000 (0:00:00.449) 0:00:11.181 ********** 2026-04-05 00:39:28.280015 | orchestrator | ok: [testbed-manager] 2026-04-05 00:39:28.280024 | orchestrator | 2026-04-05 00:39:28.280034 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-04-05 00:39:28.280044 | orchestrator | Sunday 05 April 2026 00:39:22 +0000 (0:00:00.412) 0:00:11.594 ********** 2026-04-05 00:39:28.280053 | orchestrator | changed: [testbed-manager] 2026-04-05 00:39:28.280063 | orchestrator | 2026-04-05 00:39:28.280072 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-04-05 00:39:28.280082 | orchestrator | Sunday 05 April 2026 00:39:23 +0000 (0:00:01.177) 0:00:12.771 ********** 2026-04-05 00:39:28.280091 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-05 00:39:28.280101 | orchestrator | changed: [testbed-manager] 2026-04-05 00:39:28.280111 | orchestrator | 2026-04-05 00:39:28.280120 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-04-05 00:39:28.280130 | orchestrator | Sunday 05 April 2026 00:39:24 +0000 (0:00:00.985) 0:00:13.757 ********** 2026-04-05 00:39:28.280139 | orchestrator | changed: [testbed-manager] 2026-04-05 00:39:28.280149 | orchestrator | 2026-04-05 00:39:28.280176 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-04-05 
00:39:28.280188 | orchestrator | Sunday 05 April 2026 00:39:27 +0000 (0:00:02.126) 0:00:15.884 ********** 2026-04-05 00:39:28.280198 | orchestrator | changed: [testbed-manager] 2026-04-05 00:39:28.280209 | orchestrator | 2026-04-05 00:39:28.280220 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:39:28.280231 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:39:28.280243 | orchestrator | 2026-04-05 00:39:28.280254 | orchestrator | 2026-04-05 00:39:28.280266 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:39:28.280316 | orchestrator | Sunday 05 April 2026 00:39:28 +0000 (0:00:00.995) 0:00:16.879 ********** 2026-04-05 00:39:28.280328 | orchestrator | =============================================================================== 2026-04-05 00:39:28.280339 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.89s 2026-04-05 00:39:28.280350 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 2.13s 2026-04-05 00:39:28.280361 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.95s 2026-04-05 00:39:28.280372 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.18s 2026-04-05 00:39:28.280383 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 1.00s 2026-04-05 00:39:28.280394 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.99s 2026-04-05 00:39:28.280405 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.56s 2026-04-05 00:39:28.280416 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.55s 2026-04-05 00:39:28.280426 | orchestrator | osism.services.wireguard : 
Create preshared key ------------------------- 0.46s 2026-04-05 00:39:28.280437 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.45s 2026-04-05 00:39:28.280456 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.41s 2026-04-05 00:39:28.483227 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-04-05 00:39:28.519848 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-04-05 00:39:28.519956 | orchestrator | Dload Upload Total Spent Left Speed 2026-04-05 00:39:28.594434 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 200 0 --:--:-- --:--:-- --:--:-- 202 2026-04-05 00:39:28.611715 | orchestrator | + osism apply --environment custom workarounds 2026-04-05 00:39:29.889507 | orchestrator | 2026-04-05 00:39:29 | INFO  | Trying to run play workarounds in environment custom 2026-04-05 00:39:39.957130 | orchestrator | 2026-04-05 00:39:39 | INFO  | Prepare task for execution of workarounds. 2026-04-05 00:39:40.044689 | orchestrator | 2026-04-05 00:39:40 | INFO  | Task 807f009f-8a6f-4e5d-a957-0457505462fb (workarounds) was prepared for execution. 2026-04-05 00:39:40.044788 | orchestrator | 2026-04-05 00:39:40 | INFO  | It takes a moment until task 807f009f-8a6f-4e5d-a957-0457505462fb (workarounds) has been started and output is visible here. 
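The wireguard play above generates a server keypair and a preshared key, renders wg0.conf, and enables wg-quick@wg0. A minimal sketch of how such key-generation tasks are commonly written (hypothetical paths and layout; this is not the actual osism.services.wireguard role source):

```yaml
# Illustrative sketch only -- assumes wireguard-tools is installed and that
# keys live under /etc/wireguard (a common convention, not confirmed by the
# log above).
- name: Create public and private key - server
  ansible.builtin.shell: |
    set -o pipefail
    umask 077
    wg genkey | tee /etc/wireguard/server.key | wg pubkey > /etc/wireguard/server.pub
  args:
    executable: /bin/bash
    creates: /etc/wireguard/server.key

- name: Create preshared key
  ansible.builtin.shell: |
    umask 077
    wg genpsk > /etc/wireguard/server.psk
  args:
    creates: /etc/wireguard/server.psk

- name: Manage wg-quick@wg0.service service
  ansible.builtin.systemd:
    name: wg-quick@wg0.service
    enabled: true
    state: started
```

The `creates:` guards make the tasks idempotent, which matches the `ok`/`changed` pattern visible in the play output above.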
2026-04-05 00:40:05.121427 | orchestrator | 2026-04-05 00:40:05.121543 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 00:40:05.121588 | orchestrator | 2026-04-05 00:40:05.121601 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2026-04-05 00:40:05.121618 | orchestrator | Sunday 05 April 2026 00:39:43 +0000 (0:00:00.178) 0:00:00.178 ********** 2026-04-05 00:40:05.121639 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2026-04-05 00:40:05.121658 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2026-04-05 00:40:05.121678 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2026-04-05 00:40:05.121698 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2026-04-05 00:40:05.121717 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2026-04-05 00:40:05.121737 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2026-04-05 00:40:05.121757 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2026-04-05 00:40:05.121778 | orchestrator | 2026-04-05 00:40:05.121796 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2026-04-05 00:40:05.121814 | orchestrator | 2026-04-05 00:40:05.121833 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-04-05 00:40:05.121852 | orchestrator | Sunday 05 April 2026 00:39:44 +0000 (0:00:00.780) 0:00:00.958 ********** 2026-04-05 00:40:05.121873 | orchestrator | ok: [testbed-manager] 2026-04-05 00:40:05.121895 | orchestrator | 2026-04-05 00:40:05.121918 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2026-04-05 00:40:05.121938 | orchestrator | 2026-04-05 00:40:05.121959 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2026-04-05 00:40:05.121982 | orchestrator | Sunday 05 April 2026 00:39:47 +0000 (0:00:02.972) 0:00:03.931 ********** 2026-04-05 00:40:05.122002 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:40:05.122085 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:40:05.122108 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:40:05.122126 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:40:05.122144 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:40:05.122163 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:40:05.122183 | orchestrator | 2026-04-05 00:40:05.122201 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2026-04-05 00:40:05.122220 | orchestrator | 2026-04-05 00:40:05.122238 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2026-04-05 00:40:05.122274 | orchestrator | Sunday 05 April 2026 00:39:49 +0000 (0:00:02.413) 0:00:06.344 ********** 2026-04-05 00:40:05.122348 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-05 00:40:05.122374 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-05 00:40:05.122392 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-05 00:40:05.122411 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-05 00:40:05.122429 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-05 00:40:05.122448 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-05 00:40:05.122468 | orchestrator | 2026-04-05 00:40:05.122486 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2026-04-05 00:40:05.122505 | orchestrator | Sunday 05 April 2026 00:39:50 +0000 (0:00:01.357) 0:00:07.702 ********** 2026-04-05 00:40:05.122524 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:40:05.122544 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:40:05.122564 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:40:05.122584 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:40:05.122603 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:40:05.122623 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:40:05.122642 | orchestrator | 2026-04-05 00:40:05.122661 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2026-04-05 00:40:05.122681 | orchestrator | Sunday 05 April 2026 00:39:54 +0000 (0:00:03.853) 0:00:11.556 ********** 2026-04-05 00:40:05.122701 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:40:05.122720 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:40:05.122740 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:40:05.122759 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:40:05.122778 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:40:05.122791 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:40:05.122802 | orchestrator | 2026-04-05 00:40:05.122813 | orchestrator | PLAY [Add a workaround service] ************************************************ 2026-04-05 00:40:05.122823 | orchestrator | 2026-04-05 00:40:05.122834 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2026-04-05 00:40:05.122845 | orchestrator | Sunday 05 April 2026 00:39:55 +0000 (0:00:00.558) 0:00:12.115 ********** 2026-04-05 00:40:05.122856 | orchestrator | changed: [testbed-manager] 2026-04-05 00:40:05.122867 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:40:05.122878 | orchestrator | changed: [testbed-node-1] 2026-04-05 
00:40:05.122889 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:40:05.122900 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:40:05.122910 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:40:05.122921 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:40:05.122932 | orchestrator | 2026-04-05 00:40:05.122943 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2026-04-05 00:40:05.122953 | orchestrator | Sunday 05 April 2026 00:39:57 +0000 (0:00:01.799) 0:00:13.914 ********** 2026-04-05 00:40:05.122964 | orchestrator | changed: [testbed-manager] 2026-04-05 00:40:05.122975 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:40:05.122985 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:40:05.122996 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:40:05.123007 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:40:05.123017 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:40:05.123049 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:40:05.123061 | orchestrator | 2026-04-05 00:40:05.123072 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2026-04-05 00:40:05.123083 | orchestrator | Sunday 05 April 2026 00:39:58 +0000 (0:00:01.440) 0:00:15.355 ********** 2026-04-05 00:40:05.123094 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:40:05.123105 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:40:05.123115 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:40:05.123139 | orchestrator | ok: [testbed-manager] 2026-04-05 00:40:05.123150 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:40:05.123161 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:40:05.123172 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:40:05.123182 | orchestrator | 2026-04-05 00:40:05.123193 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2026-04-05 00:40:05.123204 | orchestrator 
| Sunday 05 April 2026 00:40:00 +0000 (0:00:01.629) 0:00:16.984 ********** 2026-04-05 00:40:05.123215 | orchestrator | changed: [testbed-manager] 2026-04-05 00:40:05.123226 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:40:05.123237 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:40:05.123248 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:40:05.123259 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:40:05.123269 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:40:05.123280 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:40:05.123291 | orchestrator | 2026-04-05 00:40:05.123321 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2026-04-05 00:40:05.123333 | orchestrator | Sunday 05 April 2026 00:40:01 +0000 (0:00:01.562) 0:00:18.547 ********** 2026-04-05 00:40:05.123344 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:40:05.123355 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:40:05.123365 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:40:05.123376 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:40:05.123387 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:40:05.123398 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:40:05.123408 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:40:05.123419 | orchestrator | 2026-04-05 00:40:05.123430 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2026-04-05 00:40:05.123441 | orchestrator | 2026-04-05 00:40:05.123452 | orchestrator | TASK [Install python3-docker] ************************************************** 2026-04-05 00:40:05.123463 | orchestrator | Sunday 05 April 2026 00:40:02 +0000 (0:00:00.817) 0:00:19.365 ********** 2026-04-05 00:40:05.123474 | orchestrator | ok: [testbed-manager] 2026-04-05 00:40:05.123484 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:40:05.123495 | orchestrator | ok: 
[testbed-node-0] 2026-04-05 00:40:05.123506 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:40:05.123526 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:40:05.123537 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:40:05.123548 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:40:05.123558 | orchestrator | 2026-04-05 00:40:05.123569 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:40:05.123581 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 00:40:05.123593 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:40:05.123604 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:40:05.123615 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:40:05.123626 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:40:05.123637 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:40:05.123648 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:40:05.123659 | orchestrator | 2026-04-05 00:40:05.123670 | orchestrator | 2026-04-05 00:40:05.123681 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:40:05.123700 | orchestrator | Sunday 05 April 2026 00:40:05 +0000 (0:00:02.535) 0:00:21.900 ********** 2026-04-05 00:40:05.123711 | orchestrator | =============================================================================== 2026-04-05 00:40:05.123722 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.85s 2026-04-05 00:40:05.123732 | orchestrator | Apply 
netplan configuration --------------------------------------------- 2.97s 2026-04-05 00:40:05.123743 | orchestrator | Install python3-docker -------------------------------------------------- 2.54s 2026-04-05 00:40:05.123754 | orchestrator | Apply netplan configuration --------------------------------------------- 2.41s 2026-04-05 00:40:05.123765 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.80s 2026-04-05 00:40:05.123776 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.63s 2026-04-05 00:40:05.123786 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.56s 2026-04-05 00:40:05.123797 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.44s 2026-04-05 00:40:05.123808 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.36s 2026-04-05 00:40:05.123819 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.82s 2026-04-05 00:40:05.123830 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.78s 2026-04-05 00:40:05.123848 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.56s 2026-04-05 00:40:05.470999 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2026-04-05 00:40:16.731900 | orchestrator | 2026-04-05 00:40:16 | INFO  | Prepare task for execution of reboot. 2026-04-05 00:40:16.826789 | orchestrator | 2026-04-05 00:40:16 | INFO  | Task 827ae3dc-8342-4c43-9cae-e350db2d0145 (reboot) was prepared for execution. 2026-04-05 00:40:16.826873 | orchestrator | 2026-04-05 00:40:16 | INFO  | It takes a moment until task 827ae3dc-8342-4c43-9cae-e350db2d0145 (reboot) has been started and output is visible here. 
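The workarounds play above copies testbed.crt to the non-manager nodes and runs update-ca-certificates; the update-ca-trust task is skipped because no RedHat-family hosts are present. A sketch of the underlying Debian-family pattern (hypothetical task layout, not the OSISM playbook source):

```yaml
# Illustrative sketch -- on Debian/Ubuntu, custom CAs go under
# /usr/local/share/ca-certificates/ with a .crt suffix and are activated by
# running update-ca-certificates.
- name: Copy custom CA certificates
  ansible.builtin.copy:
    src: /opt/configuration/environments/kolla/certificates/ca/testbed.crt
    dest: /usr/local/share/ca-certificates/testbed.crt
    mode: "0644"
  register: ca_copy

- name: Run update-ca-certificates
  ansible.builtin.command: update-ca-certificates
  when:
    - ca_copy.changed
    - ansible_os_family == "Debian"
```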
2026-04-05 00:40:28.399931 | orchestrator | 2026-04-05 00:40:28.400017 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-05 00:40:28.400025 | orchestrator | 2026-04-05 00:40:28.400032 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-05 00:40:28.400038 | orchestrator | Sunday 05 April 2026 00:40:20 +0000 (0:00:00.257) 0:00:00.257 ********** 2026-04-05 00:40:28.400044 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:40:28.400051 | orchestrator | 2026-04-05 00:40:28.400057 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-05 00:40:28.400062 | orchestrator | Sunday 05 April 2026 00:40:20 +0000 (0:00:00.159) 0:00:00.416 ********** 2026-04-05 00:40:28.400068 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:40:28.400073 | orchestrator | 2026-04-05 00:40:28.400079 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-05 00:40:28.400084 | orchestrator | Sunday 05 April 2026 00:40:21 +0000 (0:00:01.256) 0:00:01.673 ********** 2026-04-05 00:40:28.400090 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:40:28.400095 | orchestrator | 2026-04-05 00:40:28.400100 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-05 00:40:28.400106 | orchestrator | 2026-04-05 00:40:28.400111 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-05 00:40:28.400116 | orchestrator | Sunday 05 April 2026 00:40:21 +0000 (0:00:00.113) 0:00:01.786 ********** 2026-04-05 00:40:28.400122 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:40:28.400127 | orchestrator | 2026-04-05 00:40:28.400132 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-05 00:40:28.400138 | orchestrator | Sunday 05 April 2026 
00:40:21 +0000 (0:00:00.099) 0:00:01.886 ********** 2026-04-05 00:40:28.400155 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:40:28.400161 | orchestrator | 2026-04-05 00:40:28.400183 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-05 00:40:28.400190 | orchestrator | Sunday 05 April 2026 00:40:22 +0000 (0:00:01.034) 0:00:02.921 ********** 2026-04-05 00:40:28.400197 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:40:28.400203 | orchestrator | 2026-04-05 00:40:28.400209 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-05 00:40:28.400215 | orchestrator | 2026-04-05 00:40:28.400222 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-05 00:40:28.400228 | orchestrator | Sunday 05 April 2026 00:40:22 +0000 (0:00:00.137) 0:00:03.059 ********** 2026-04-05 00:40:28.400234 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:40:28.400240 | orchestrator | 2026-04-05 00:40:28.400246 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-05 00:40:28.400252 | orchestrator | Sunday 05 April 2026 00:40:23 +0000 (0:00:00.087) 0:00:03.147 ********** 2026-04-05 00:40:28.400258 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:40:28.400264 | orchestrator | 2026-04-05 00:40:28.400271 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-05 00:40:28.400277 | orchestrator | Sunday 05 April 2026 00:40:24 +0000 (0:00:01.076) 0:00:04.223 ********** 2026-04-05 00:40:28.400283 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:40:28.400289 | orchestrator | 2026-04-05 00:40:28.400295 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-05 00:40:28.400301 | orchestrator | 2026-04-05 00:40:28.400308 | orchestrator | TASK [Exit playbook, if 
user did not mean to reboot systems] ******************* 2026-04-05 00:40:28.400314 | orchestrator | Sunday 05 April 2026 00:40:24 +0000 (0:00:00.119) 0:00:04.342 ********** 2026-04-05 00:40:28.400320 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:40:28.400356 | orchestrator | 2026-04-05 00:40:28.400363 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-05 00:40:28.400369 | orchestrator | Sunday 05 April 2026 00:40:24 +0000 (0:00:00.112) 0:00:04.455 ********** 2026-04-05 00:40:28.400375 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:40:28.400381 | orchestrator | 2026-04-05 00:40:28.400387 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-05 00:40:28.400394 | orchestrator | Sunday 05 April 2026 00:40:25 +0000 (0:00:01.014) 0:00:05.469 ********** 2026-04-05 00:40:28.400400 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:40:28.400406 | orchestrator | 2026-04-05 00:40:28.400414 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-05 00:40:28.400424 | orchestrator | 2026-04-05 00:40:28.400435 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-05 00:40:28.400445 | orchestrator | Sunday 05 April 2026 00:40:25 +0000 (0:00:00.127) 0:00:05.597 ********** 2026-04-05 00:40:28.400455 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:40:28.400466 | orchestrator | 2026-04-05 00:40:28.400479 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-05 00:40:28.400490 | orchestrator | Sunday 05 April 2026 00:40:25 +0000 (0:00:00.118) 0:00:05.716 ********** 2026-04-05 00:40:28.400502 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:40:28.400510 | orchestrator | 2026-04-05 00:40:28.400517 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2026-04-05 00:40:28.400524 | orchestrator | Sunday 05 April 2026 00:40:26 +0000 (0:00:01.144) 0:00:06.860 ********** 2026-04-05 00:40:28.400531 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:40:28.400538 | orchestrator | 2026-04-05 00:40:28.400545 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-05 00:40:28.400552 | orchestrator | 2026-04-05 00:40:28.400560 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-05 00:40:28.400567 | orchestrator | Sunday 05 April 2026 00:40:26 +0000 (0:00:00.138) 0:00:06.999 ********** 2026-04-05 00:40:28.400574 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:40:28.400581 | orchestrator | 2026-04-05 00:40:28.400588 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-05 00:40:28.400601 | orchestrator | Sunday 05 April 2026 00:40:27 +0000 (0:00:00.118) 0:00:07.117 ********** 2026-04-05 00:40:28.400609 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:40:28.400616 | orchestrator | 2026-04-05 00:40:28.400623 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-05 00:40:28.400630 | orchestrator | Sunday 05 April 2026 00:40:28 +0000 (0:00:01.036) 0:00:08.154 ********** 2026-04-05 00:40:28.400650 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:40:28.400657 | orchestrator | 2026-04-05 00:40:28.400664 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:40:28.400672 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:40:28.400681 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:40:28.400688 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-04-05 00:40:28.400696 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:40:28.400703 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:40:28.400710 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:40:28.400717 | orchestrator | 2026-04-05 00:40:28.400723 | orchestrator | 2026-04-05 00:40:28.400733 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:40:28.400740 | orchestrator | Sunday 05 April 2026 00:40:28 +0000 (0:00:00.047) 0:00:08.201 ********** 2026-04-05 00:40:28.400746 | orchestrator | =============================================================================== 2026-04-05 00:40:28.400752 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 6.56s 2026-04-05 00:40:28.400758 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.70s 2026-04-05 00:40:28.400765 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.68s 2026-04-05 00:40:28.611106 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-04-05 00:40:39.991176 | orchestrator | 2026-04-05 00:40:39 | INFO  | Prepare task for execution of wait-for-connection. 2026-04-05 00:40:40.072851 | orchestrator | 2026-04-05 00:40:40 | INFO  | Task e76318d0-cd31-48f5-89ed-1c3ebf568e06 (wait-for-connection) was prepared for execution. 2026-04-05 00:40:40.072946 | orchestrator | 2026-04-05 00:40:40 | INFO  | It takes a moment until task e76318d0-cd31-48f5-89ed-1c3ebf568e06 (wait-for-connection) has been started and output is visible here. 
2026-04-05 00:40:55.282670 | orchestrator | 2026-04-05 00:40:55.282783 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-04-05 00:40:55.282800 | orchestrator | 2026-04-05 00:40:55.282813 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-04-05 00:40:55.282824 | orchestrator | Sunday 05 April 2026 00:40:43 +0000 (0:00:00.322) 0:00:00.322 ********** 2026-04-05 00:40:55.282835 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:40:55.282848 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:40:55.282860 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:40:55.282871 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:40:55.282881 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:40:55.282892 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:40:55.282903 | orchestrator | 2026-04-05 00:40:55.282914 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:40:55.282954 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:40:55.282968 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:40:55.282979 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:40:55.282990 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:40:55.283001 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:40:55.283012 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:40:55.283023 | orchestrator | 2026-04-05 00:40:55.283034 | orchestrator | 2026-04-05 00:40:55.283045 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-05 00:40:55.283056 | orchestrator | Sunday 05 April 2026 00:40:54 +0000 (0:00:11.554) 0:00:11.877 ********** 2026-04-05 00:40:55.283067 | orchestrator | =============================================================================== 2026-04-05 00:40:55.283078 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.55s 2026-04-05 00:40:55.484041 | orchestrator | + osism apply hddtemp 2026-04-05 00:41:06.885560 | orchestrator | 2026-04-05 00:41:06 | INFO  | Prepare task for execution of hddtemp. 2026-04-05 00:41:06.967042 | orchestrator | 2026-04-05 00:41:06 | INFO  | Task 4ab8b6e8-e801-4945-b24c-1a2fd8dffd96 (hddtemp) was prepared for execution. 2026-04-05 00:41:06.967133 | orchestrator | 2026-04-05 00:41:06 | INFO  | It takes a moment until task 4ab8b6e8-e801-4945-b24c-1a2fd8dffd96 (hddtemp) has been started and output is visible here. 2026-04-05 00:41:34.721530 | orchestrator | 2026-04-05 00:41:34.721622 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-04-05 00:41:34.721631 | orchestrator | 2026-04-05 00:41:34.721638 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-04-05 00:41:34.721644 | orchestrator | Sunday 05 April 2026 00:41:10 +0000 (0:00:00.360) 0:00:00.360 ********** 2026-04-05 00:41:34.721650 | orchestrator | ok: [testbed-manager] 2026-04-05 00:41:34.721658 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:41:34.721664 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:41:34.721670 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:41:34.721676 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:41:34.721681 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:41:34.721687 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:41:34.721693 | orchestrator | 2026-04-05 00:41:34.721699 | orchestrator | TASK [osism.services.hddtemp : Include 
distribution specific install tasks] **** 2026-04-05 00:41:34.721704 | orchestrator | Sunday 05 April 2026 00:41:11 +0000 (0:00:00.667) 0:00:01.028 ********** 2026-04-05 00:41:34.721711 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:41:34.721719 | orchestrator | 2026-04-05 00:41:34.721725 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-04-05 00:41:34.721744 | orchestrator | Sunday 05 April 2026 00:41:12 +0000 (0:00:01.290) 0:00:02.319 ********** 2026-04-05 00:41:34.721751 | orchestrator | ok: [testbed-manager] 2026-04-05 00:41:34.721756 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:41:34.721762 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:41:34.721768 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:41:34.721773 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:41:34.721779 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:41:34.721785 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:41:34.721808 | orchestrator | 2026-04-05 00:41:34.721814 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-04-05 00:41:34.721820 | orchestrator | Sunday 05 April 2026 00:41:15 +0000 (0:00:02.624) 0:00:04.943 ********** 2026-04-05 00:41:34.721825 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:41:34.721832 | orchestrator | changed: [testbed-manager] 2026-04-05 00:41:34.721847 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:41:34.721853 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:41:34.721859 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:41:34.721864 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:41:34.721870 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:41:34.721876 | 
orchestrator | 2026-04-05 00:41:34.721881 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2026-04-05 00:41:34.721887 | orchestrator | Sunday 05 April 2026 00:41:16 +0000 (0:00:01.005) 0:00:05.949 ********** 2026-04-05 00:41:34.721893 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:41:34.721899 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:41:34.721904 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:41:34.721910 | orchestrator | ok: [testbed-manager] 2026-04-05 00:41:34.721916 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:41:34.721922 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:41:34.721927 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:41:34.721933 | orchestrator | 2026-04-05 00:41:34.721939 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-04-05 00:41:34.721945 | orchestrator | Sunday 05 April 2026 00:41:18 +0000 (0:00:02.310) 0:00:08.259 ********** 2026-04-05 00:41:34.721950 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:41:34.721955 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:41:34.721961 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:41:34.721966 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:41:34.721971 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:41:34.721977 | orchestrator | changed: [testbed-manager] 2026-04-05 00:41:34.721982 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:41:34.721987 | orchestrator | 2026-04-05 00:41:34.721993 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-04-05 00:41:34.721998 | orchestrator | Sunday 05 April 2026 00:41:19 +0000 (0:00:00.691) 0:00:08.951 ********** 2026-04-05 00:41:34.722004 | orchestrator | changed: [testbed-manager] 2026-04-05 00:41:34.722009 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:41:34.722046 | orchestrator | changed: [testbed-node-3] 
2026-04-05 00:41:34.722054 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:41:34.722059 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:41:34.722065 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:41:34.722070 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:41:34.722076 | orchestrator | 2026-04-05 00:41:34.722081 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-04-05 00:41:34.722087 | orchestrator | Sunday 05 April 2026 00:41:31 +0000 (0:00:12.156) 0:00:21.107 ********** 2026-04-05 00:41:34.722093 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:41:34.722098 | orchestrator | 2026-04-05 00:41:34.722104 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-04-05 00:41:34.722109 | orchestrator | Sunday 05 April 2026 00:41:32 +0000 (0:00:01.197) 0:00:22.305 ********** 2026-04-05 00:41:34.722115 | orchestrator | changed: [testbed-manager] 2026-04-05 00:41:34.722120 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:41:34.722125 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:41:34.722131 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:41:34.722136 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:41:34.722142 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:41:34.722147 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:41:34.722152 | orchestrator | 2026-04-05 00:41:34.722158 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:41:34.722169 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:41:34.722188 | orchestrator | testbed-node-0 : ok=8  
changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 00:41:34.722195 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 00:41:34.722200 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 00:41:34.722206 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 00:41:34.722211 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 00:41:34.722216 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 00:41:34.722222 | orchestrator | 2026-04-05 00:41:34.722227 | orchestrator | 2026-04-05 00:41:34.722233 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:41:34.722238 | orchestrator | Sunday 05 April 2026 00:41:34 +0000 (0:00:01.863) 0:00:24.169 ********** 2026-04-05 00:41:34.722244 | orchestrator | =============================================================================== 2026-04-05 00:41:34.722249 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.16s 2026-04-05 00:41:34.722255 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.62s 2026-04-05 00:41:34.722260 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 2.31s 2026-04-05 00:41:34.722266 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.86s 2026-04-05 00:41:34.722272 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.29s 2026-04-05 00:41:34.722277 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.20s 2026-04-05 00:41:34.722282 | orchestrator | osism.services.hddtemp : Enable 
Kernel Module drivetemp ----------------- 1.01s 2026-04-05 00:41:34.722288 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.69s 2026-04-05 00:41:34.722293 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.67s 2026-04-05 00:41:34.931042 | orchestrator | ++ semver latest 7.1.1 2026-04-05 00:41:34.987877 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-05 00:41:34.987967 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-05 00:41:34.987981 | orchestrator | + sudo systemctl restart manager.service 2026-04-05 00:41:52.576039 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-05 00:41:52.576150 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-05 00:41:52.576167 | orchestrator | + local max_attempts=60 2026-04-05 00:41:52.576180 | orchestrator | + local name=ceph-ansible 2026-04-05 00:41:52.576191 | orchestrator | + local attempt_num=1 2026-04-05 00:41:52.576831 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-05 00:41:52.618103 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-05 00:41:52.618192 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-05 00:41:52.618204 | orchestrator | + sleep 5 2026-04-05 00:41:57.623861 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-05 00:41:57.772632 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-05 00:41:57.772764 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-05 00:41:57.772782 | orchestrator | + sleep 5 2026-04-05 00:42:02.776859 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-05 00:42:02.821879 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-05 00:42:02.821970 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-05 00:42:02.822066 | orchestrator | + sleep 5 2026-04-05 00:42:07.826897 | 
orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-05 00:42:07.863705 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-05 00:42:07.864451 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-05 00:42:07.864555 | orchestrator | + sleep 5 2026-04-05 00:42:12.868896 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-05 00:42:12.907208 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-05 00:42:12.907307 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-05 00:42:12.907322 | orchestrator | + sleep 5 2026-04-05 00:42:17.913188 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-05 00:42:17.957346 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-05 00:42:17.957506 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-05 00:42:17.957524 | orchestrator | + sleep 5 2026-04-05 00:42:22.962671 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-05 00:42:22.998253 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-05 00:42:22.998432 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-05 00:42:22.998449 | orchestrator | + sleep 5 2026-04-05 00:42:28.004531 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-05 00:42:28.045745 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-05 00:42:28.045820 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-05 00:42:28.045835 | orchestrator | + sleep 5 2026-04-05 00:42:33.048958 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-05 00:42:33.096518 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-05 00:42:33.096617 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-05 00:42:33.096632 | orchestrator | + sleep 5 2026-04-05 00:42:38.102687 | orchestrator | ++ 
/usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-05 00:42:38.141050 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-05 00:42:38.141120 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-05 00:42:38.141127 | orchestrator | + sleep 5 2026-04-05 00:42:43.144859 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-05 00:42:43.183609 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-05 00:42:43.183696 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-05 00:42:43.183710 | orchestrator | + sleep 5 2026-04-05 00:42:48.188942 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-05 00:42:48.230119 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-05 00:42:48.230196 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-05 00:42:48.230207 | orchestrator | + sleep 5 2026-04-05 00:42:53.235126 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-05 00:42:53.276498 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-05 00:42:53.276605 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-05 00:42:53.276619 | orchestrator | + sleep 5 2026-04-05 00:42:58.282530 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-05 00:42:58.321261 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-05 00:42:58.321340 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-05 00:42:58.321358 | orchestrator | + local max_attempts=60 2026-04-05 00:42:58.321384 | orchestrator | + local name=kolla-ansible 2026-04-05 00:42:58.321399 | orchestrator | + local attempt_num=1 2026-04-05 00:42:58.321783 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-05 00:42:58.351304 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-05 00:42:58.351379 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2026-04-05 00:42:58.351392 | orchestrator | + local max_attempts=60 2026-04-05 00:42:58.351404 | orchestrator | + local name=osism-ansible 2026-04-05 00:42:58.351415 | orchestrator | + local attempt_num=1 2026-04-05 00:42:58.351445 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-05 00:42:58.393137 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-05 00:42:58.393207 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-05 00:42:58.393219 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-05 00:42:58.548709 | orchestrator | ARA in ceph-ansible already disabled. 2026-04-05 00:42:58.718254 | orchestrator | ARA in kolla-ansible already disabled. 2026-04-05 00:42:58.881390 | orchestrator | ARA in osism-ansible already disabled. 2026-04-05 00:42:59.041562 | orchestrator | ARA in osism-kubernetes already disabled. 2026-04-05 00:42:59.041683 | orchestrator | + osism apply gather-facts 2026-04-05 00:43:10.650739 | orchestrator | 2026-04-05 00:43:10 | INFO  | Prepare task for execution of gather-facts. 2026-04-05 00:43:10.728270 | orchestrator | 2026-04-05 00:43:10 | INFO  | Task 69ad623c-3eaf-46b7-a097-3587452a4d03 (gather-facts) was prepared for execution. 2026-04-05 00:43:10.728369 | orchestrator | 2026-04-05 00:43:10 | INFO  | It takes a moment until task 69ad623c-3eaf-46b7-a097-3587452a4d03 (gather-facts) has been started and output is visible here. 
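The `wait_for_container_healthy` trace above (repeated `docker inspect -f '{{.State.Health.Status}}'` calls with a 5-second sleep until the status reads `healthy`) can be sketched as a POSIX shell function. This is a hypothetical reconstruction from the `set -x` output, not the script's actual source; `CHECK_CMD` and `POLL_INTERVAL` are illustration-only hooks added here so the loop can be exercised without Docker.

```shell
# Hypothetical reconstruction of wait_for_container_healthy from the xtrace above.
# The real script calls `docker inspect -f '{{.State.Health.Status}}' "$name"`
# directly and sleeps a fixed 5 seconds between checks.
docker_health() {
    /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$1"
}

wait_for_container_healthy() {
    max_attempts=$1
    name=$2
    attempt_num=1
    while :; do
        status=$("${CHECK_CMD:-docker_health}" "$name")
        if [ "$status" = "healthy" ]; then
            return 0                      # container is up, stop polling
        fi
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            return 1                      # give up after max_attempts checks
        fi
        attempt_num=$((attempt_num + 1))
        sleep "${POLL_INTERVAL:-5}"
    done
}
```

The trace shows the ceph-ansible container moving through `unhealthy`, then `starting`, then `healthy` over roughly a minute, which is exactly the progression a loop like this tolerates.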
2026-04-05 00:43:20.681725 | orchestrator | 2026-04-05 00:43:20.681799 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-05 00:43:20.681807 | orchestrator | 2026-04-05 00:43:20.681812 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-05 00:43:20.681817 | orchestrator | Sunday 05 April 2026 00:43:14 +0000 (0:00:00.304) 0:00:00.304 ********** 2026-04-05 00:43:20.681821 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:43:20.681826 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:43:20.681830 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:43:20.681834 | orchestrator | ok: [testbed-manager] 2026-04-05 00:43:20.681838 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:43:20.681842 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:43:20.681846 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:43:20.681850 | orchestrator | 2026-04-05 00:43:20.681854 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-05 00:43:20.681858 | orchestrator | 2026-04-05 00:43:20.681861 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-05 00:43:20.681865 | orchestrator | Sunday 05 April 2026 00:43:19 +0000 (0:00:05.548) 0:00:05.852 ********** 2026-04-05 00:43:20.681869 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:43:20.681874 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:43:20.681878 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:43:20.681882 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:43:20.681886 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:43:20.681889 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:43:20.681893 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:43:20.681897 | orchestrator | 2026-04-05 00:43:20.681901 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-05 00:43:20.681905 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 00:43:20.681910 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 00:43:20.681914 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 00:43:20.681918 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 00:43:20.681922 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 00:43:20.681926 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 00:43:20.681929 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 00:43:20.681933 | orchestrator | 2026-04-05 00:43:20.681937 | orchestrator | 2026-04-05 00:43:20.681941 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:43:20.681945 | orchestrator | Sunday 05 April 2026 00:43:20 +0000 (0:00:00.662) 0:00:06.515 ********** 2026-04-05 00:43:20.681949 | orchestrator | =============================================================================== 2026-04-05 00:43:20.681953 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.55s 2026-04-05 00:43:20.681975 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.66s 2026-04-05 00:43:20.933733 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-04-05 00:43:20.952454 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-04-05 
00:43:20.965891 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-04-05 00:43:20.985543 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-04-05 00:43:21.006251 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-04-05 00:43:21.025296 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-04-05 00:43:21.043185 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-04-05 00:43:21.054954 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-04-05 00:43:21.065062 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-04-05 00:43:21.077180 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-04-05 00:43:21.094825 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-04-05 00:43:21.108757 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-04-05 00:43:21.123607 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-04-05 00:43:21.138056 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-04-05 00:43:21.153612 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-04-05 00:43:21.167497 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-04-05 00:43:21.182624 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-04-05 00:43:21.197073 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-04-05 00:43:21.217599 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-04-05 00:43:21.238649 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-04-05 00:43:21.258180 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-04-05 00:43:21.274274 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-04-05 00:43:21.294308 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-04-05 00:43:21.314767 | orchestrator | + [[ false == \t\r\u\e ]] 2026-04-05 00:43:21.512365 | orchestrator | ok: Runtime: 0:24:44.185859 2026-04-05 00:43:21.616717 | 2026-04-05 00:43:21.616901 | TASK [Deploy services] 2026-04-05 00:43:22.156623 | orchestrator | skipping: Conditional result was False 2026-04-05 00:43:22.177548 | 2026-04-05 00:43:22.177738 | TASK [Deploy in a nutshell] 2026-04-05 00:43:22.892002 | orchestrator | + set -e 2026-04-05 00:43:22.892143 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-05 00:43:22.892158 | orchestrator | ++ export INTERACTIVE=false 2026-04-05 00:43:22.892171 | orchestrator | ++ INTERACTIVE=false 2026-04-05 00:43:22.892179 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-05 00:43:22.892186 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-05 00:43:22.892205 | 
orchestrator | + source /opt/manager-vars.sh
2026-04-05 00:43:22.892233 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-05 00:43:22.892250 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-05 00:43:22.892258 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-05 00:43:22.892267 | orchestrator | ++ CEPH_VERSION=reef
2026-04-05 00:43:22.892274 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-05 00:43:22.892285 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-05 00:43:22.892291 | orchestrator | ++ export MANAGER_VERSION=latest
2026-04-05 00:43:22.892303 | orchestrator | ++ MANAGER_VERSION=latest
2026-04-05 00:43:22.892309 | orchestrator | ++ export OPENSTACK_VERSION=2025.1
2026-04-05 00:43:22.892317 | orchestrator | ++ OPENSTACK_VERSION=2025.1
2026-04-05 00:43:22.892323 | orchestrator | ++ export ARA=false
2026-04-05 00:43:22.892329 | orchestrator | ++ ARA=false
2026-04-05 00:43:22.892335 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-05 00:43:22.892342 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-05 00:43:22.892348 | orchestrator | ++ export TEMPEST=true
2026-04-05 00:43:22.892353 | orchestrator | ++ TEMPEST=true
2026-04-05 00:43:22.892359 | orchestrator | ++ export IS_ZUUL=true
2026-04-05 00:43:22.892365 | orchestrator | ++ IS_ZUUL=true
2026-04-05 00:43:22.892371 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182
2026-04-05 00:43:22.892381 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182
2026-04-05 00:43:22.892387 | orchestrator | ++ export EXTERNAL_API=false
2026-04-05 00:43:22.892392 | orchestrator | ++ EXTERNAL_API=false
2026-04-05 00:43:22.892398 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-05 00:43:22.892404 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-05 00:43:22.892410 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-05 00:43:22.892416 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-05 00:43:22.892421 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-05 00:43:22.892427 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-05 00:43:22.892433 | orchestrator | + echo
2026-04-05 00:43:22.892471 | orchestrator |
2026-04-05 00:43:22.892477 | orchestrator | # PULL IMAGES
2026-04-05 00:43:22.892483 | orchestrator |
2026-04-05 00:43:22.892492 | orchestrator | + echo '# PULL IMAGES'
2026-04-05 00:43:22.892498 | orchestrator | + echo
2026-04-05 00:43:22.893335 | orchestrator | ++ semver latest 7.0.0
2026-04-05 00:43:22.953428 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-05 00:43:22.953574 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-04-05 00:43:22.953625 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-04-05 00:43:24.333488 | orchestrator | 2026-04-05 00:43:24 | INFO  | Trying to run play pull-images in environment custom
2026-04-05 00:43:34.387868 | orchestrator | 2026-04-05 00:43:34 | INFO  | Prepare task for execution of pull-images.
2026-04-05 00:43:34.472395 | orchestrator | 2026-04-05 00:43:34 | INFO  | Task ac9075d0-7cff-473b-9e3a-5939ef904dc8 (pull-images) was prepared for execution.
2026-04-05 00:43:34.472533 | orchestrator | 2026-04-05 00:43:34 | INFO  | Task ac9075d0-7cff-473b-9e3a-5939ef904dc8 is running in background. No more output. Check ARA for logs.
2026-04-05 00:43:36.124658 | orchestrator | 2026-04-05 00:43:36 | INFO  | Trying to run play wipe-partitions in environment custom
2026-04-05 00:43:46.303867 | orchestrator | 2026-04-05 00:43:46 | INFO  | Prepare task for execution of wipe-partitions.
2026-04-05 00:43:46.387120 | orchestrator | 2026-04-05 00:43:46 | INFO  | Task cd4d80c4-1d8c-42ff-8115-952cc9231898 (wipe-partitions) was prepared for execution.
2026-04-05 00:43:46.387217 | orchestrator | 2026-04-05 00:43:46 | INFO  | It takes a moment until task cd4d80c4-1d8c-42ff-8115-952cc9231898 (wipe-partitions) has been started and output is visible here.
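The trace above gates image pulling on the manager version: `semver latest 7.0.0` evaluates to -1, so the first test fails and the fallback branch matches the literal tag `latest`. A minimal sketch of that gate, with a stand-in comparator (the job's real `semver` helper is not shown in this log, so its behavior here is an assumption):

```shell
# Stand-in comparator (assumption): non-release tags such as "latest"
# compare lower than any release, mirroring the -1 seen in the trace.
MANAGER_VERSION=latest
case "$MANAGER_VERSION" in
  [0-9]*) cmp=1 ;;   # real numeric comparison elided in this sketch
  *)      cmp=-1 ;;
esac
# Pull images when the version is >= 7.0.0 OR the tag is literally "latest".
if [ "$cmp" -ge 0 ] || [ "$MANAGER_VERSION" = "latest" ]; then
  action="osism apply --no-wait -r 2 -e custom pull-images"
fi
echo "$action"
```

With `MANAGER_VERSION=latest` the comparison branch fails but the literal match succeeds, which is exactly why the `pull-images` play still runs in this job.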
2026-04-05 00:43:58.065358 | orchestrator |
2026-04-05 00:43:58.065578 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-04-05 00:43:58.065602 | orchestrator |
2026-04-05 00:43:58.065612 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-04-05 00:43:58.065630 | orchestrator | Sunday 05 April 2026 00:43:49 +0000 (0:00:00.162) 0:00:00.163 **********
2026-04-05 00:43:58.065668 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:43:58.065680 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:43:58.065689 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:43:58.065697 | orchestrator |
2026-04-05 00:43:58.065706 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-04-05 00:43:58.065715 | orchestrator | Sunday 05 April 2026 00:43:50 +0000 (0:00:01.029) 0:00:01.193 **********
2026-04-05 00:43:58.065728 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:43:58.065737 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:43:58.065746 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:43:58.065755 | orchestrator |
2026-04-05 00:43:58.065764 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-04-05 00:43:58.065772 | orchestrator | Sunday 05 April 2026 00:43:50 +0000 (0:00:00.240) 0:00:01.433 **********
2026-04-05 00:43:58.065781 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:43:58.065790 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:43:58.065799 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:43:58.065807 | orchestrator |
2026-04-05 00:43:58.065816 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-04-05 00:43:58.065825 | orchestrator | Sunday 05 April 2026 00:43:51 +0000 (0:00:00.537) 0:00:01.970 **********
2026-04-05 00:43:58.065834 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:43:58.065841 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:43:58.065849 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:43:58.065858 | orchestrator |
2026-04-05 00:43:58.065867 | orchestrator | TASK [Check device availability] ***********************************************
2026-04-05 00:43:58.065877 | orchestrator | Sunday 05 April 2026 00:43:51 +0000 (0:00:00.246) 0:00:02.217 **********
2026-04-05 00:43:58.065886 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-04-05 00:43:58.065899 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-04-05 00:43:58.065908 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-04-05 00:43:58.065918 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-04-05 00:43:58.065928 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-04-05 00:43:58.065937 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-04-05 00:43:58.065947 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-04-05 00:43:58.065957 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-04-05 00:43:58.065967 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-04-05 00:43:58.065977 | orchestrator |
2026-04-05 00:43:58.065987 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-04-05 00:43:58.065996 | orchestrator | Sunday 05 April 2026 00:43:52 +0000 (0:00:01.264) 0:00:03.481 **********
2026-04-05 00:43:58.066007 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-04-05 00:43:58.066109 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-04-05 00:43:58.066123 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-04-05 00:43:58.066133 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-04-05 00:43:58.066142 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-04-05 00:43:58.066150 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-04-05 00:43:58.066157 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-04-05 00:43:58.066165 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-04-05 00:43:58.066173 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-04-05 00:43:58.066181 | orchestrator |
2026-04-05 00:43:58.066195 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-04-05 00:43:58.066204 | orchestrator | Sunday 05 April 2026 00:43:54 +0000 (0:00:01.366) 0:00:04.847 **********
2026-04-05 00:43:58.066211 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-04-05 00:43:58.066219 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-04-05 00:43:58.066227 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-04-05 00:43:58.066235 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-04-05 00:43:58.066251 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-04-05 00:43:58.066259 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-04-05 00:43:58.066267 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-04-05 00:43:58.066274 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-04-05 00:43:58.066282 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-04-05 00:43:58.066290 | orchestrator |
2026-04-05 00:43:58.066298 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-04-05 00:43:58.066306 | orchestrator | Sunday 05 April 2026 00:43:56 +0000 (0:00:00.593) 0:00:06.885 **********
2026-04-05 00:43:58.066314 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:43:58.066322 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:43:58.066330 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:43:58.066338 | orchestrator |
2026-04-05 00:43:58.066346 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-04-05 00:43:58.066354 | orchestrator | Sunday 05 April 2026 00:43:56 +0000 (0:00:00.593) 0:00:07.479 **********
2026-04-05 00:43:58.066361 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:43:58.066369 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:43:58.066377 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:43:58.066385 | orchestrator |
2026-04-05 00:43:58.066393 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:43:58.066402 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 00:43:58.066412 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 00:43:58.066438 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 00:43:58.066447 | orchestrator |
2026-04-05 00:43:58.066479 | orchestrator |
2026-04-05 00:43:58.066494 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:43:58.066503 | orchestrator | Sunday 05 April 2026 00:43:57 +0000 (0:00:00.803) 0:00:08.283 **********
2026-04-05 00:43:58.066511 | orchestrator | ===============================================================================
2026-04-05 00:43:58.066518 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.04s
2026-04-05 00:43:58.066526 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.37s
2026-04-05 00:43:58.066534 | orchestrator | Check device availability ----------------------------------------------- 1.26s
2026-04-05 00:43:58.066542 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 1.03s
2026-04-05 00:43:58.066550 | orchestrator | Request device events from the kernel ----------------------------------- 0.80s
2026-04-05 00:43:58.066558 | orchestrator | Reload udev rules ------------------------------------------------------- 0.59s
2026-04-05 00:43:58.066565 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.54s
2026-04-05 00:43:58.066573 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.25s
2026-04-05 00:43:58.066581 | orchestrator | Remove all rook related logical devices --------------------------------- 0.24s
2026-04-05 00:44:09.628115 | orchestrator | 2026-04-05 00:44:09 | INFO  | Prepare task for execution of facts.
2026-04-05 00:44:09.703444 | orchestrator | 2026-04-05 00:44:09 | INFO  | Task b81ff8bb-63be-4617-97f0-cbc5758259c3 (facts) was prepared for execution.
2026-04-05 00:44:09.703655 | orchestrator | 2026-04-05 00:44:09 | INFO  | It takes a moment until task b81ff8bb-63be-4617-97f0-cbc5758259c3 (facts) has been started and output is visible here.
2026-04-05 00:44:21.550418 | orchestrator |
2026-04-05 00:44:21.550526 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-04-05 00:44:21.550539 | orchestrator |
2026-04-05 00:44:21.550578 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-05 00:44:21.550586 | orchestrator | Sunday 05 April 2026 00:44:13 +0000 (0:00:00.398) 0:00:00.398 **********
2026-04-05 00:44:21.550593 | orchestrator | ok: [testbed-manager]
2026-04-05 00:44:21.550602 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:44:21.550610 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:44:21.550617 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:44:21.550624 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:44:21.550631 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:44:21.550637 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:44:21.550643 | orchestrator |
2026-04-05 00:44:21.550649 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-05 00:44:21.550656 | orchestrator | Sunday 05 April 2026 00:44:14 +0000 (0:00:01.465) 0:00:01.863 **********
2026-04-05 00:44:21.550662 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:44:21.550670 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:44:21.550677 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:44:21.550684 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:44:21.550691 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:21.550698 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:44:21.550705 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:44:21.550712 | orchestrator |
2026-04-05 00:44:21.550719 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-05 00:44:21.550742 | orchestrator |
2026-04-05 00:44:21.550749 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-05 00:44:21.550757 | orchestrator | Sunday 05 April 2026 00:44:15 +0000 (0:00:01.226) 0:00:03.090 **********
2026-04-05 00:44:21.550764 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:44:21.550772 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:44:21.550779 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:44:21.550786 | orchestrator | ok: [testbed-manager]
2026-04-05 00:44:21.550793 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:44:21.550799 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:44:21.550806 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:44:21.550813 | orchestrator |
2026-04-05 00:44:21.550820 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-04-05 00:44:21.550828 | orchestrator |
2026-04-05 00:44:21.550835 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-04-05 00:44:21.550843 | orchestrator | Sunday 05 April 2026 00:44:20 +0000 (0:00:04.696) 0:00:07.786 **********
2026-04-05 00:44:21.550850 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:44:21.550857 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:44:21.550865 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:44:21.550872 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:44:21.550880 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:21.550887 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:44:21.550894 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:44:21.550901 | orchestrator |
2026-04-05 00:44:21.550908 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:44:21.550916 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 00:44:21.550924 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 00:44:21.550932 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 00:44:21.550939 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 00:44:21.550946 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 00:44:21.550963 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 00:44:21.550970 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 00:44:21.550978 | orchestrator |
2026-04-05 00:44:21.550987 | orchestrator |
2026-04-05 00:44:21.550995 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:44:21.551007 | orchestrator | Sunday 05 April 2026 00:44:21 +0000 (0:00:00.576) 0:00:08.363 **********
2026-04-05 00:44:21.551015 | orchestrator | ===============================================================================
2026-04-05 00:44:21.551022 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.70s
2026-04-05 00:44:21.551029 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.47s
2026-04-05 00:44:21.551037 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.23s
2026-04-05 00:44:21.551045 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.58s
2026-04-05 00:44:23.234765 | orchestrator | 2026-04-05 00:44:23 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes.
2026-04-05 00:44:23.299679 | orchestrator | 2026-04-05 00:44:23 | INFO  | Task 386d4ce4-2c2d-4224-abbc-be961701dad0 (ceph-configure-lvm-volumes) was prepared for execution.
2026-04-05 00:44:23.299776 | orchestrator | 2026-04-05 00:44:23 | INFO  | It takes a moment until task 386d4ce4-2c2d-4224-abbc-be961701dad0 (ceph-configure-lvm-volumes) has been started and output is visible here.
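Judging by the task names in the wipe-partitions play logged earlier, the per-device work amounts to a wipefs pass, a 32M zero overwrite, and a udev reload/trigger. A dry-run sketch under that assumption (the device list and `dd` geometry are inferred from the log, not taken from the play's source):

```shell
# DRY_RUN=1 records the plan instead of touching disks.
DRY_RUN=1
PLAN=""
run() {
  if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; PLAN="$PLAN|$*"; else "$@"; fi
}
for dev in /dev/sdb /dev/sdc /dev/sdd; do                  # "Check device availability"
  run wipefs --all --force "$dev"                          # "Wipe partitions with wipefs"
  run dd if=/dev/zero of="$dev" bs=1M count=32 conv=fsync  # "Overwrite first 32M with zeros"
done
run udevadm control --reload-rules                         # "Reload udev rules"
run udevadm trigger --subsystem-match=block                # "Request device events from the kernel"
```

Zeroing the first 32M after `wipefs` clears any metadata (LVM labels, GPT backups at the front of the disk) that could make ceph-volume reject the device as already in use.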
2026-04-05 00:44:36.172836 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-05 00:44:36.172936 | orchestrator | 2.16.14
2026-04-05 00:44:36.172952 | orchestrator |
2026-04-05 00:44:36.172965 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-04-05 00:44:36.172978 | orchestrator |
2026-04-05 00:44:36.172989 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-05 00:44:36.173000 | orchestrator | Sunday 05 April 2026 00:44:28 +0000 (0:00:00.299) 0:00:00.299 **********
2026-04-05 00:44:36.173012 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-05 00:44:36.173023 | orchestrator |
2026-04-05 00:44:36.173034 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-05 00:44:36.173045 | orchestrator | Sunday 05 April 2026 00:44:28 +0000 (0:00:00.231) 0:00:00.530 **********
2026-04-05 00:44:36.173057 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:44:36.173068 | orchestrator |
2026-04-05 00:44:36.173079 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:36.173095 | orchestrator | Sunday 05 April 2026 00:44:28 +0000 (0:00:00.238) 0:00:00.769 **********
2026-04-05 00:44:36.173127 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-04-05 00:44:36.173147 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-04-05 00:44:36.173165 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-04-05 00:44:36.173182 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-04-05 00:44:36.173199 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-04-05 00:44:36.173220 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-04-05 00:44:36.173239 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-04-05 00:44:36.173260 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-04-05 00:44:36.173279 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-04-05 00:44:36.173298 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-04-05 00:44:36.173351 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-04-05 00:44:36.173374 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-04-05 00:44:36.173396 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-04-05 00:44:36.173415 | orchestrator |
2026-04-05 00:44:36.173435 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:36.173456 | orchestrator | Sunday 05 April 2026 00:44:29 +0000 (0:00:00.377) 0:00:01.146 **********
2026-04-05 00:44:36.173502 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:36.173522 | orchestrator |
2026-04-05 00:44:36.173542 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:36.173561 | orchestrator | Sunday 05 April 2026 00:44:29 +0000 (0:00:00.493) 0:00:01.640 **********
2026-04-05 00:44:36.173580 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:36.173600 | orchestrator |
2026-04-05 00:44:36.173620 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:36.173645 | orchestrator | Sunday 05 April 2026 00:44:29 +0000 (0:00:00.217) 0:00:01.858 **********
2026-04-05 00:44:36.173664 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:36.173684 | orchestrator |
2026-04-05 00:44:36.173702 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:36.173720 | orchestrator | Sunday 05 April 2026 00:44:30 +0000 (0:00:00.229) 0:00:02.087 **********
2026-04-05 00:44:36.173739 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:36.173758 | orchestrator |
2026-04-05 00:44:36.173777 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:36.173796 | orchestrator | Sunday 05 April 2026 00:44:30 +0000 (0:00:00.212) 0:00:02.300 **********
2026-04-05 00:44:36.173814 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:36.173832 | orchestrator |
2026-04-05 00:44:36.173851 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:36.173870 | orchestrator | Sunday 05 April 2026 00:44:30 +0000 (0:00:00.205) 0:00:02.505 **********
2026-04-05 00:44:36.173909 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:36.173943 | orchestrator |
2026-04-05 00:44:36.173961 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:36.173980 | orchestrator | Sunday 05 April 2026 00:44:30 +0000 (0:00:00.219) 0:00:02.725 **********
2026-04-05 00:44:36.173999 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:36.174097 | orchestrator |
2026-04-05 00:44:36.174122 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:36.174139 | orchestrator | Sunday 05 April 2026 00:44:30 +0000 (0:00:00.204) 0:00:02.930 **********
2026-04-05 00:44:36.174159 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:36.174178 | orchestrator |
2026-04-05 00:44:36.174195 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:36.174212 | orchestrator | Sunday 05 April 2026 00:44:31 +0000 (0:00:00.230) 0:00:03.160 **********
2026-04-05 00:44:36.174230 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5)
2026-04-05 00:44:36.174249 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5)
2026-04-05 00:44:36.174269 | orchestrator |
2026-04-05 00:44:36.174287 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:36.174331 | orchestrator | Sunday 05 April 2026 00:44:31 +0000 (0:00:00.459) 0:00:03.620 **********
2026-04-05 00:44:36.174352 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_caeb3c42-c4b8-40bd-8e18-9e72dc321772)
2026-04-05 00:44:36.174370 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_caeb3c42-c4b8-40bd-8e18-9e72dc321772)
2026-04-05 00:44:36.174389 | orchestrator |
2026-04-05 00:44:36.174418 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:36.174454 | orchestrator | Sunday 05 April 2026 00:44:32 +0000 (0:00:00.508) 0:00:04.128 **********
2026-04-05 00:44:36.174513 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_62ed18a5-03b2-4cb7-a868-d43e6cb85064)
2026-04-05 00:44:36.174534 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_62ed18a5-03b2-4cb7-a868-d43e6cb85064)
2026-04-05 00:44:36.174552 | orchestrator |
2026-04-05 00:44:36.174571 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:36.174589 | orchestrator | Sunday 05 April 2026 00:44:32 +0000 (0:00:00.706) 0:00:04.834 **********
2026-04-05 00:44:36.174608 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_831c674b-a7a8-4a18-9cfe-2b7acfd18a4e)
2026-04-05 00:44:36.174627 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_831c674b-a7a8-4a18-9cfe-2b7acfd18a4e)
2026-04-05 00:44:36.174646 | orchestrator |
2026-04-05 00:44:36.174664 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:36.174682 | orchestrator | Sunday 05 April 2026 00:44:33 +0000 (0:00:00.651) 0:00:05.486 **********
2026-04-05 00:44:36.174702 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-05 00:44:36.174720 | orchestrator |
2026-04-05 00:44:36.174737 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:36.174749 | orchestrator | Sunday 05 April 2026 00:44:34 +0000 (0:00:00.828) 0:00:06.314 **********
2026-04-05 00:44:36.174760 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-04-05 00:44:36.174770 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-04-05 00:44:36.174781 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-04-05 00:44:36.174792 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-04-05 00:44:36.174810 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-04-05 00:44:36.174829 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-04-05 00:44:36.174847 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-04-05 00:44:36.174866 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-04-05 00:44:36.174883 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-04-05 00:44:36.174901 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-04-05 00:44:36.174921 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-04-05 00:44:36.174938 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-04-05 00:44:36.174958 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-04-05 00:44:36.174977 | orchestrator |
2026-04-05 00:44:36.174995 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:36.175008 | orchestrator | Sunday 05 April 2026 00:44:34 +0000 (0:00:00.433) 0:00:06.748 **********
2026-04-05 00:44:36.175019 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:36.175030 | orchestrator |
2026-04-05 00:44:36.175041 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:36.175051 | orchestrator | Sunday 05 April 2026 00:44:34 +0000 (0:00:00.212) 0:00:06.961 **********
2026-04-05 00:44:36.175062 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:36.175072 | orchestrator |
2026-04-05 00:44:36.175083 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:36.175094 | orchestrator | Sunday 05 April 2026 00:44:35 +0000 (0:00:00.206) 0:00:07.167 **********
2026-04-05 00:44:36.175104 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:36.175126 | orchestrator |
2026-04-05 00:44:36.175137 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:36.175148 | orchestrator | Sunday 05 April 2026 00:44:35 +0000 (0:00:00.258) 0:00:07.426 **********
2026-04-05 00:44:36.175159 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:36.175169 | orchestrator |
2026-04-05 00:44:36.175180 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:36.175191 | orchestrator | Sunday 05 April 2026 00:44:35 +0000 (0:00:00.206) 0:00:07.632 **********
2026-04-05 00:44:36.175202 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:36.175212 | orchestrator |
2026-04-05 00:44:36.175223 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:36.175234 | orchestrator | Sunday 05 April 2026 00:44:35 +0000 (0:00:00.200) 0:00:07.833 **********
2026-04-05 00:44:36.175244 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:36.175255 | orchestrator |
2026-04-05 00:44:36.175266 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:36.175276 | orchestrator | Sunday 05 April 2026 00:44:35 +0000 (0:00:00.211) 0:00:08.044 **********
2026-04-05 00:44:36.175287 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:36.175298 | orchestrator |
2026-04-05 00:44:36.175320 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:44.312176 | orchestrator | Sunday 05 April 2026 00:44:36 +0000 (0:00:00.183) 0:00:08.228 **********
2026-04-05 00:44:44.312326 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:44.312338 | orchestrator |
2026-04-05 00:44:44.312347 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:44.312355 | orchestrator | Sunday 05 April 2026 00:44:36 +0000 (0:00:00.210) 0:00:08.438 **********
2026-04-05 00:44:44.312363 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-04-05 00:44:44.312371 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-04-05 00:44:44.312379 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-04-05 00:44:44.312387 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-04-05 00:44:44.312448 | orchestrator |
2026-04-05 00:44:44.312458 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:44.312508 | orchestrator | Sunday 05 April 2026 00:44:37 +0000 (0:00:01.044) 0:00:09.482 **********
2026-04-05 00:44:44.312518 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:44.312525 | orchestrator |
2026-04-05 00:44:44.312534 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:44.312543 | orchestrator | Sunday 05 April 2026 00:44:37 +0000 (0:00:00.206) 0:00:09.689 **********
2026-04-05 00:44:44.312553 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:44.312561 | orchestrator |
2026-04-05 00:44:44.312570 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:44.312578 | orchestrator | Sunday 05 April 2026 00:44:37 +0000 (0:00:00.210) 0:00:09.900 **********
2026-04-05 00:44:44.312587 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:44.312595 | orchestrator |
2026-04-05 00:44:44.312602 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:44.312610 | orchestrator | Sunday 05 April 2026 00:44:38 +0000 (0:00:00.205) 0:00:10.105 **********
2026-04-05 00:44:44.312617 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:44.312624 | orchestrator |
2026-04-05 00:44:44.312631 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-04-05 00:44:44.312639 | orchestrator | Sunday 05 April 2026 00:44:38 +0000 (0:00:00.218) 0:00:10.324 **********
2026-04-05 00:44:44.312646 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-04-05 00:44:44.312653 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-04-05 00:44:44.312661 | orchestrator |
2026-04-05 00:44:44.312668 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-04-05 00:44:44.312675 | orchestrator | Sunday 05 April 2026 00:44:38 +0000 (0:00:00.190) 0:00:10.515 **********
2026-04-05 00:44:44.312710 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:44.312718 | orchestrator |
2026-04-05 00:44:44.312726 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-04-05 00:44:44.312734 | orchestrator | Sunday 05 April 2026 00:44:38 +0000 (0:00:00.131) 0:00:10.647 **********
2026-04-05 00:44:44.312741 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:44.312749 | orchestrator |
2026-04-05 00:44:44.312757 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-04-05 00:44:44.312765 | orchestrator | Sunday 05 April 2026 00:44:38 +0000 (0:00:00.151) 0:00:10.798 **********
2026-04-05 00:44:44.312772 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:44.312780 | orchestrator |
2026-04-05 00:44:44.312788 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-04-05 00:44:44.312796 | orchestrator | Sunday 05 April 2026 00:44:38 +0000 (0:00:00.145) 0:00:10.944 **********
2026-04-05 00:44:44.312804 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:44:44.312812 | orchestrator |
2026-04-05 00:44:44.312819 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-04-05 00:44:44.312827 | orchestrator | Sunday 05 April 2026 00:44:39 +0000 (0:00:00.148) 0:00:11.092 **********
2026-04-05 00:44:44.312837 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bd7e6aba-230a-5307-afd3-3b474950d4d0'}})
2026-04-05 00:44:44.312845 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ffa9e237-b4c6-554d-9530-d8db42979c07'}})
2026-04-05 00:44:44.312853 | orchestrator |
2026-04-05 00:44:44.312861 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-04-05 00:44:44.312869 | orchestrator | Sunday 05 April 2026 00:44:39 +0000 (0:00:00.192) 0:00:11.285 **********
2026-04-05 00:44:44.312878 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bd7e6aba-230a-5307-afd3-3b474950d4d0'}})
2026-04-05 00:44:44.312895 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ffa9e237-b4c6-554d-9530-d8db42979c07'}})
2026-04-05 00:44:44.312907 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:44.312914 | orchestrator |
2026-04-05 00:44:44.312922 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-04-05 00:44:44.312929 | orchestrator | Sunday 05 April 2026 00:44:39 +0000 (0:00:00.144) 0:00:11.430 **********
2026-04-05 00:44:44.312936 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bd7e6aba-230a-5307-afd3-3b474950d4d0'}})
2026-04-05 00:44:44.312944 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ffa9e237-b4c6-554d-9530-d8db42979c07'}})
2026-04-05 00:44:44.312952 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:44.312960 | orchestrator |
2026-04-05 00:44:44.312967 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-04-05 00:44:44.312975 | orchestrator | Sunday 05 April 2026 00:44:39 +0000 (0:00:00.164) 0:00:11.594 **********
2026-04-05 00:44:44.312983 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bd7e6aba-230a-5307-afd3-3b474950d4d0'}})
2026-04-05 00:44:44.313012 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ffa9e237-b4c6-554d-9530-d8db42979c07'}})
2026-04-05 00:44:44.313021 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:44.313029 |
orchestrator | 2026-04-05 00:44:44.313037 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-05 00:44:44.313046 | orchestrator | Sunday 05 April 2026 00:44:39 +0000 (0:00:00.385) 0:00:11.979 ********** 2026-04-05 00:44:44.313054 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:44:44.313062 | orchestrator | 2026-04-05 00:44:44.313070 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-05 00:44:44.313078 | orchestrator | Sunday 05 April 2026 00:44:40 +0000 (0:00:00.148) 0:00:12.128 ********** 2026-04-05 00:44:44.313086 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:44:44.313101 | orchestrator | 2026-04-05 00:44:44.313108 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-05 00:44:44.313115 | orchestrator | Sunday 05 April 2026 00:44:40 +0000 (0:00:00.153) 0:00:12.281 ********** 2026-04-05 00:44:44.313122 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:44:44.313130 | orchestrator | 2026-04-05 00:44:44.313137 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-05 00:44:44.313144 | orchestrator | Sunday 05 April 2026 00:44:40 +0000 (0:00:00.140) 0:00:12.421 ********** 2026-04-05 00:44:44.313151 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:44:44.313159 | orchestrator | 2026-04-05 00:44:44.313166 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-05 00:44:44.313173 | orchestrator | Sunday 05 April 2026 00:44:40 +0000 (0:00:00.130) 0:00:12.552 ********** 2026-04-05 00:44:44.313180 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:44:44.313187 | orchestrator | 2026-04-05 00:44:44.313194 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-05 00:44:44.313201 | orchestrator | Sunday 05 April 2026 00:44:40 +0000 
(0:00:00.132) 0:00:12.684 ********** 2026-04-05 00:44:44.313208 | orchestrator | ok: [testbed-node-3] => { 2026-04-05 00:44:44.313215 | orchestrator |  "ceph_osd_devices": { 2026-04-05 00:44:44.313223 | orchestrator |  "sdb": { 2026-04-05 00:44:44.313230 | orchestrator |  "osd_lvm_uuid": "bd7e6aba-230a-5307-afd3-3b474950d4d0" 2026-04-05 00:44:44.313238 | orchestrator |  }, 2026-04-05 00:44:44.313245 | orchestrator |  "sdc": { 2026-04-05 00:44:44.313253 | orchestrator |  "osd_lvm_uuid": "ffa9e237-b4c6-554d-9530-d8db42979c07" 2026-04-05 00:44:44.313261 | orchestrator |  } 2026-04-05 00:44:44.313270 | orchestrator |  } 2026-04-05 00:44:44.313277 | orchestrator | } 2026-04-05 00:44:44.313285 | orchestrator | 2026-04-05 00:44:44.313292 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-04-05 00:44:44.313299 | orchestrator | Sunday 05 April 2026 00:44:40 +0000 (0:00:00.141) 0:00:12.826 ********** 2026-04-05 00:44:44.313306 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:44:44.313313 | orchestrator | 2026-04-05 00:44:44.313320 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-04-05 00:44:44.313326 | orchestrator | Sunday 05 April 2026 00:44:40 +0000 (0:00:00.127) 0:00:12.953 ********** 2026-04-05 00:44:44.313332 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:44:44.313340 | orchestrator | 2026-04-05 00:44:44.313347 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-04-05 00:44:44.313355 | orchestrator | Sunday 05 April 2026 00:44:41 +0000 (0:00:00.133) 0:00:13.087 ********** 2026-04-05 00:44:44.313363 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:44:44.313370 | orchestrator | 2026-04-05 00:44:44.313378 | orchestrator | TASK [Print configuration data] ************************************************ 2026-04-05 00:44:44.313385 | orchestrator | Sunday 05 April 2026 00:44:41 +0000 
(0:00:00.118) 0:00:13.205 ********** 2026-04-05 00:44:44.313392 | orchestrator | changed: [testbed-node-3] => { 2026-04-05 00:44:44.313399 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-04-05 00:44:44.313407 | orchestrator |  "ceph_osd_devices": { 2026-04-05 00:44:44.313415 | orchestrator |  "sdb": { 2026-04-05 00:44:44.313423 | orchestrator |  "osd_lvm_uuid": "bd7e6aba-230a-5307-afd3-3b474950d4d0" 2026-04-05 00:44:44.313431 | orchestrator |  }, 2026-04-05 00:44:44.313439 | orchestrator |  "sdc": { 2026-04-05 00:44:44.313447 | orchestrator |  "osd_lvm_uuid": "ffa9e237-b4c6-554d-9530-d8db42979c07" 2026-04-05 00:44:44.313455 | orchestrator |  } 2026-04-05 00:44:44.313463 | orchestrator |  }, 2026-04-05 00:44:44.313471 | orchestrator |  "lvm_volumes": [ 2026-04-05 00:44:44.313518 | orchestrator |  { 2026-04-05 00:44:44.313526 | orchestrator |  "data": "osd-block-bd7e6aba-230a-5307-afd3-3b474950d4d0", 2026-04-05 00:44:44.313533 | orchestrator |  "data_vg": "ceph-bd7e6aba-230a-5307-afd3-3b474950d4d0" 2026-04-05 00:44:44.313546 | orchestrator |  }, 2026-04-05 00:44:44.313554 | orchestrator |  { 2026-04-05 00:44:44.313560 | orchestrator |  "data": "osd-block-ffa9e237-b4c6-554d-9530-d8db42979c07", 2026-04-05 00:44:44.313567 | orchestrator |  "data_vg": "ceph-ffa9e237-b4c6-554d-9530-d8db42979c07" 2026-04-05 00:44:44.313574 | orchestrator |  } 2026-04-05 00:44:44.313582 | orchestrator |  ] 2026-04-05 00:44:44.313589 | orchestrator |  } 2026-04-05 00:44:44.313596 | orchestrator | } 2026-04-05 00:44:44.313603 | orchestrator | 2026-04-05 00:44:44.313610 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-04-05 00:44:44.313617 | orchestrator | Sunday 05 April 2026 00:44:41 +0000 (0:00:00.199) 0:00:13.405 ********** 2026-04-05 00:44:44.313622 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-05 00:44:44.313628 | orchestrator | 2026-04-05 00:44:44.313634 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-04-05 00:44:44.313640 | orchestrator | 2026-04-05 00:44:44.313647 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-05 00:44:44.313653 | orchestrator | Sunday 05 April 2026 00:44:43 +0000 (0:00:02.416) 0:00:15.821 ********** 2026-04-05 00:44:44.313659 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-04-05 00:44:44.313665 | orchestrator | 2026-04-05 00:44:44.313672 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-05 00:44:44.313679 | orchestrator | Sunday 05 April 2026 00:44:44 +0000 (0:00:00.309) 0:00:16.131 ********** 2026-04-05 00:44:44.313685 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:44:44.313691 | orchestrator | 2026-04-05 00:44:44.313704 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:44:52.098286 | orchestrator | Sunday 05 April 2026 00:44:44 +0000 (0:00:00.239) 0:00:16.370 ********** 2026-04-05 00:44:52.098398 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-04-05 00:44:52.098414 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-04-05 00:44:52.098425 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-04-05 00:44:52.098437 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-04-05 00:44:52.098447 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-04-05 00:44:52.098458 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-04-05 00:44:52.098469 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-04-05 00:44:52.098535 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-04-05 00:44:52.098547 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-04-05 00:44:52.098559 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-04-05 00:44:52.098570 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-04-05 00:44:52.098580 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-04-05 00:44:52.098611 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-04-05 00:44:52.098623 | orchestrator | 2026-04-05 00:44:52.098634 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:44:52.098645 | orchestrator | Sunday 05 April 2026 00:44:44 +0000 (0:00:00.424) 0:00:16.794 ********** 2026-04-05 00:44:52.098657 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:52.098669 | orchestrator | 2026-04-05 00:44:52.098680 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:44:52.098691 | orchestrator | Sunday 05 April 2026 00:44:44 +0000 (0:00:00.173) 0:00:16.968 ********** 2026-04-05 00:44:52.098721 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:52.098732 | orchestrator | 2026-04-05 00:44:52.098743 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:44:52.098754 | orchestrator | Sunday 05 April 2026 00:44:45 +0000 (0:00:00.228) 0:00:17.197 ********** 2026-04-05 00:44:52.098765 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:52.098776 | orchestrator | 2026-04-05 00:44:52.098787 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:44:52.098798 | 
orchestrator | Sunday 05 April 2026 00:44:45 +0000 (0:00:00.179) 0:00:17.376 ********** 2026-04-05 00:44:52.098808 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:52.098819 | orchestrator | 2026-04-05 00:44:52.098830 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:44:52.098841 | orchestrator | Sunday 05 April 2026 00:44:45 +0000 (0:00:00.176) 0:00:17.554 ********** 2026-04-05 00:44:52.098852 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:52.098863 | orchestrator | 2026-04-05 00:44:52.098874 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:44:52.098884 | orchestrator | Sunday 05 April 2026 00:44:45 +0000 (0:00:00.180) 0:00:17.735 ********** 2026-04-05 00:44:52.098895 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:52.098906 | orchestrator | 2026-04-05 00:44:52.098917 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:44:52.098928 | orchestrator | Sunday 05 April 2026 00:44:46 +0000 (0:00:00.668) 0:00:18.403 ********** 2026-04-05 00:44:52.098938 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:52.098949 | orchestrator | 2026-04-05 00:44:52.098960 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:44:52.098971 | orchestrator | Sunday 05 April 2026 00:44:46 +0000 (0:00:00.220) 0:00:18.623 ********** 2026-04-05 00:44:52.098982 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:52.098993 | orchestrator | 2026-04-05 00:44:52.099003 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:44:52.099014 | orchestrator | Sunday 05 April 2026 00:44:46 +0000 (0:00:00.247) 0:00:18.871 ********** 2026-04-05 00:44:52.099025 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3) 2026-04-05 00:44:52.099037 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3) 2026-04-05 00:44:52.099048 | orchestrator | 2026-04-05 00:44:52.099059 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:44:52.099069 | orchestrator | Sunday 05 April 2026 00:44:47 +0000 (0:00:00.448) 0:00:19.319 ********** 2026-04-05 00:44:52.099080 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_dde5ff38-a1e5-4746-bab1-211109e78654) 2026-04-05 00:44:52.099091 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_dde5ff38-a1e5-4746-bab1-211109e78654) 2026-04-05 00:44:52.099102 | orchestrator | 2026-04-05 00:44:52.099113 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:44:52.099123 | orchestrator | Sunday 05 April 2026 00:44:47 +0000 (0:00:00.460) 0:00:19.780 ********** 2026-04-05 00:44:52.099134 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4c017526-66b5-4804-9f5d-05d3d9a7b1e0) 2026-04-05 00:44:52.099145 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4c017526-66b5-4804-9f5d-05d3d9a7b1e0) 2026-04-05 00:44:52.099156 | orchestrator | 2026-04-05 00:44:52.099167 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:44:52.099194 | orchestrator | Sunday 05 April 2026 00:44:48 +0000 (0:00:00.503) 0:00:20.283 ********** 2026-04-05 00:44:52.099206 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_26a11086-b273-42dd-aa8f-9644b133a637) 2026-04-05 00:44:52.099217 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_26a11086-b273-42dd-aa8f-9644b133a637) 2026-04-05 00:44:52.099228 | orchestrator | 2026-04-05 00:44:52.099246 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-04-05 00:44:52.099257 | orchestrator | Sunday 05 April 2026 00:44:48 +0000 (0:00:00.480) 0:00:20.764 ********** 2026-04-05 00:44:52.099268 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-05 00:44:52.099279 | orchestrator | 2026-04-05 00:44:52.099290 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:44:52.099300 | orchestrator | Sunday 05 April 2026 00:44:49 +0000 (0:00:00.382) 0:00:21.146 ********** 2026-04-05 00:44:52.099311 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-04-05 00:44:52.099322 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-04-05 00:44:52.099340 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-04-05 00:44:52.099351 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-04-05 00:44:52.099361 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-04-05 00:44:52.099372 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-04-05 00:44:52.099383 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-04-05 00:44:52.099393 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-04-05 00:44:52.099404 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-04-05 00:44:52.099414 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-04-05 00:44:52.099425 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 
2026-04-05 00:44:52.099436 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-04-05 00:44:52.099447 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-04-05 00:44:52.099457 | orchestrator | 2026-04-05 00:44:52.099468 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:44:52.099496 | orchestrator | Sunday 05 April 2026 00:44:49 +0000 (0:00:00.396) 0:00:21.543 ********** 2026-04-05 00:44:52.099507 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:52.099518 | orchestrator | 2026-04-05 00:44:52.099529 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:44:52.099540 | orchestrator | Sunday 05 April 2026 00:44:49 +0000 (0:00:00.185) 0:00:21.729 ********** 2026-04-05 00:44:52.099551 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:52.099561 | orchestrator | 2026-04-05 00:44:52.099572 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:44:52.099583 | orchestrator | Sunday 05 April 2026 00:44:50 +0000 (0:00:00.510) 0:00:22.240 ********** 2026-04-05 00:44:52.099594 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:52.099605 | orchestrator | 2026-04-05 00:44:52.099616 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:44:52.099626 | orchestrator | Sunday 05 April 2026 00:44:50 +0000 (0:00:00.190) 0:00:22.430 ********** 2026-04-05 00:44:52.099637 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:52.099648 | orchestrator | 2026-04-05 00:44:52.099659 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:44:52.099670 | orchestrator | Sunday 05 April 2026 00:44:50 +0000 (0:00:00.179) 0:00:22.610 ********** 2026-04-05 00:44:52.099681 
| orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:52.099691 | orchestrator | 2026-04-05 00:44:52.099702 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:44:52.099713 | orchestrator | Sunday 05 April 2026 00:44:50 +0000 (0:00:00.221) 0:00:22.831 ********** 2026-04-05 00:44:52.099724 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:52.099742 | orchestrator | 2026-04-05 00:44:52.099753 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:44:52.099764 | orchestrator | Sunday 05 April 2026 00:44:50 +0000 (0:00:00.169) 0:00:23.000 ********** 2026-04-05 00:44:52.099775 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:52.099786 | orchestrator | 2026-04-05 00:44:52.099797 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:44:52.099808 | orchestrator | Sunday 05 April 2026 00:44:51 +0000 (0:00:00.208) 0:00:23.208 ********** 2026-04-05 00:44:52.099819 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:52.099830 | orchestrator | 2026-04-05 00:44:52.099840 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:44:52.099851 | orchestrator | Sunday 05 April 2026 00:44:51 +0000 (0:00:00.218) 0:00:23.427 ********** 2026-04-05 00:44:52.099862 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-04-05 00:44:52.099873 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-04-05 00:44:52.099884 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-04-05 00:44:52.099895 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-04-05 00:44:52.099905 | orchestrator | 2026-04-05 00:44:52.099916 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:44:52.099927 | orchestrator | Sunday 05 April 2026 00:44:51 +0000 (0:00:00.619) 0:00:24.046 
********** 2026-04-05 00:44:52.099938 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:58.824311 | orchestrator | 2026-04-05 00:44:58.824416 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:44:58.824431 | orchestrator | Sunday 05 April 2026 00:44:52 +0000 (0:00:00.187) 0:00:24.234 ********** 2026-04-05 00:44:58.824442 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:58.824453 | orchestrator | 2026-04-05 00:44:58.824463 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:44:58.824473 | orchestrator | Sunday 05 April 2026 00:44:52 +0000 (0:00:00.170) 0:00:24.404 ********** 2026-04-05 00:44:58.824578 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:58.824590 | orchestrator | 2026-04-05 00:44:58.824600 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:44:58.824610 | orchestrator | Sunday 05 April 2026 00:44:52 +0000 (0:00:00.195) 0:00:24.599 ********** 2026-04-05 00:44:58.824619 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:58.824629 | orchestrator | 2026-04-05 00:44:58.824639 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-05 00:44:58.824648 | orchestrator | Sunday 05 April 2026 00:44:52 +0000 (0:00:00.175) 0:00:24.775 ********** 2026-04-05 00:44:58.824658 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-04-05 00:44:58.824668 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-04-05 00:44:58.824678 | orchestrator | 2026-04-05 00:44:58.824688 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-04-05 00:44:58.824717 | orchestrator | Sunday 05 April 2026 00:44:53 +0000 (0:00:00.384) 0:00:25.159 ********** 2026-04-05 00:44:58.824727 | orchestrator | skipping: 
[testbed-node-4] 2026-04-05 00:44:58.824737 | orchestrator | 2026-04-05 00:44:58.824747 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-05 00:44:58.824756 | orchestrator | Sunday 05 April 2026 00:44:53 +0000 (0:00:00.106) 0:00:25.265 ********** 2026-04-05 00:44:58.824766 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:58.824775 | orchestrator | 2026-04-05 00:44:58.824785 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-05 00:44:58.824798 | orchestrator | Sunday 05 April 2026 00:44:53 +0000 (0:00:00.124) 0:00:25.389 ********** 2026-04-05 00:44:58.824808 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:58.824818 | orchestrator | 2026-04-05 00:44:58.824828 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-05 00:44:58.824839 | orchestrator | Sunday 05 April 2026 00:44:53 +0000 (0:00:00.118) 0:00:25.508 ********** 2026-04-05 00:44:58.824872 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:44:58.824885 | orchestrator | 2026-04-05 00:44:58.824895 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-05 00:44:58.824906 | orchestrator | Sunday 05 April 2026 00:44:53 +0000 (0:00:00.116) 0:00:25.624 ********** 2026-04-05 00:44:58.824918 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c330a934-8550-546d-8551-a9ce4f4a4f0f'}}) 2026-04-05 00:44:58.824930 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '824ea9fd-8e44-5b08-9075-8333765a455e'}}) 2026-04-05 00:44:58.824940 | orchestrator | 2026-04-05 00:44:58.824952 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-04-05 00:44:58.824964 | orchestrator | Sunday 05 April 2026 00:44:53 +0000 (0:00:00.154) 0:00:25.779 ********** 2026-04-05 00:44:58.824975 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c330a934-8550-546d-8551-a9ce4f4a4f0f'}})  2026-04-05 00:44:58.824989 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '824ea9fd-8e44-5b08-9075-8333765a455e'}})  2026-04-05 00:44:58.825000 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:58.825010 | orchestrator | 2026-04-05 00:44:58.825019 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-05 00:44:58.825029 | orchestrator | Sunday 05 April 2026 00:44:53 +0000 (0:00:00.129) 0:00:25.908 ********** 2026-04-05 00:44:58.825038 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c330a934-8550-546d-8551-a9ce4f4a4f0f'}})  2026-04-05 00:44:58.825048 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '824ea9fd-8e44-5b08-9075-8333765a455e'}})  2026-04-05 00:44:58.825058 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:58.825068 | orchestrator | 2026-04-05 00:44:58.825077 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-05 00:44:58.825086 | orchestrator | Sunday 05 April 2026 00:44:53 +0000 (0:00:00.133) 0:00:26.042 ********** 2026-04-05 00:44:58.825096 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c330a934-8550-546d-8551-a9ce4f4a4f0f'}})  2026-04-05 00:44:58.825106 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '824ea9fd-8e44-5b08-9075-8333765a455e'}})  2026-04-05 00:44:58.825115 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:58.825125 | orchestrator | 2026-04-05 00:44:58.825134 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-05 00:44:58.825143 | orchestrator | Sunday 05 April 2026 00:44:54 +0000 
(0:00:00.163) 0:00:26.206 ********** 2026-04-05 00:44:58.825153 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:44:58.825162 | orchestrator | 2026-04-05 00:44:58.825172 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-05 00:44:58.825181 | orchestrator | Sunday 05 April 2026 00:44:54 +0000 (0:00:00.146) 0:00:26.353 ********** 2026-04-05 00:44:58.825191 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:44:58.825200 | orchestrator | 2026-04-05 00:44:58.825210 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-05 00:44:58.825219 | orchestrator | Sunday 05 April 2026 00:44:54 +0000 (0:00:00.120) 0:00:26.474 ********** 2026-04-05 00:44:58.825246 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:58.825256 | orchestrator | 2026-04-05 00:44:58.825266 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-05 00:44:58.825276 | orchestrator | Sunday 05 April 2026 00:44:54 +0000 (0:00:00.123) 0:00:26.597 ********** 2026-04-05 00:44:58.825285 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:58.825295 | orchestrator | 2026-04-05 00:44:58.825305 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-05 00:44:58.825314 | orchestrator | Sunday 05 April 2026 00:44:54 +0000 (0:00:00.259) 0:00:26.857 ********** 2026-04-05 00:44:58.825324 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:58.825340 | orchestrator | 2026-04-05 00:44:58.825350 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-05 00:44:58.825359 | orchestrator | Sunday 05 April 2026 00:44:54 +0000 (0:00:00.138) 0:00:26.995 ********** 2026-04-05 00:44:58.825369 | orchestrator | ok: [testbed-node-4] => { 2026-04-05 00:44:58.825378 | orchestrator |  "ceph_osd_devices": { 2026-04-05 00:44:58.825388 | orchestrator |  "sdb": 
{
2026-04-05 00:44:58.825398 | orchestrator |             "osd_lvm_uuid": "c330a934-8550-546d-8551-a9ce4f4a4f0f"
2026-04-05 00:44:58.825408 | orchestrator |         },
2026-04-05 00:44:58.825417 | orchestrator |         "sdc": {
2026-04-05 00:44:58.825427 | orchestrator |             "osd_lvm_uuid": "824ea9fd-8e44-5b08-9075-8333765a455e"
2026-04-05 00:44:58.825437 | orchestrator |         }
2026-04-05 00:44:58.825446 | orchestrator |     }
2026-04-05 00:44:58.825456 | orchestrator | }
2026-04-05 00:44:58.825466 | orchestrator |
2026-04-05 00:44:58.825476 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-05 00:44:58.825550 | orchestrator | Sunday 05 April 2026 00:44:55 +0000 (0:00:00.165) 0:00:27.160 **********
2026-04-05 00:44:58.825560 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:44:58.825570 | orchestrator |
2026-04-05 00:44:58.825579 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-05 00:44:58.825589 | orchestrator | Sunday 05 April 2026 00:44:55 +0000 (0:00:00.125) 0:00:27.286 **********
2026-04-05 00:44:58.825598 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:44:58.825608 | orchestrator |
2026-04-05 00:44:58.825617 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-05 00:44:58.825627 | orchestrator | Sunday 05 April 2026 00:44:55 +0000 (0:00:00.124) 0:00:27.410 **********
2026-04-05 00:44:58.825636 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:44:58.825646 | orchestrator |
2026-04-05 00:44:58.825656 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-05 00:44:58.825672 | orchestrator | Sunday 05 April 2026 00:44:55 +0000 (0:00:00.135) 0:00:27.545 **********
2026-04-05 00:44:58.825682 | orchestrator | changed: [testbed-node-4] => {
2026-04-05 00:44:58.825692 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-04-05 00:44:58.825702 | orchestrator |         "ceph_osd_devices": {
2026-04-05 00:44:58.825712 | orchestrator |             "sdb": {
2026-04-05 00:44:58.825721 | orchestrator |                 "osd_lvm_uuid": "c330a934-8550-546d-8551-a9ce4f4a4f0f"
2026-04-05 00:44:58.825731 | orchestrator |             },
2026-04-05 00:44:58.825741 | orchestrator |             "sdc": {
2026-04-05 00:44:58.825751 | orchestrator |                 "osd_lvm_uuid": "824ea9fd-8e44-5b08-9075-8333765a455e"
2026-04-05 00:44:58.825761 | orchestrator |             }
2026-04-05 00:44:58.825769 | orchestrator |         },
2026-04-05 00:44:58.825777 | orchestrator |         "lvm_volumes": [
2026-04-05 00:44:58.825785 | orchestrator |             {
2026-04-05 00:44:58.825792 | orchestrator |                 "data": "osd-block-c330a934-8550-546d-8551-a9ce4f4a4f0f",
2026-04-05 00:44:58.825800 | orchestrator |                 "data_vg": "ceph-c330a934-8550-546d-8551-a9ce4f4a4f0f"
2026-04-05 00:44:58.825808 | orchestrator |             },
2026-04-05 00:44:58.825816 | orchestrator |             {
2026-04-05 00:44:58.825824 | orchestrator |                 "data": "osd-block-824ea9fd-8e44-5b08-9075-8333765a455e",
2026-04-05 00:44:58.825831 | orchestrator |                 "data_vg": "ceph-824ea9fd-8e44-5b08-9075-8333765a455e"
2026-04-05 00:44:58.825839 | orchestrator |             }
2026-04-05 00:44:58.825847 | orchestrator |         ]
2026-04-05 00:44:58.825855 | orchestrator |     }
2026-04-05 00:44:58.825863 | orchestrator | }
2026-04-05 00:44:58.825870 | orchestrator |
2026-04-05 00:44:58.825878 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-05 00:44:58.825886 | orchestrator | Sunday 05 April 2026 00:44:55 +0000 (0:00:00.213) 0:00:27.759 **********
2026-04-05 00:44:58.825894 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-04-05 00:44:58.825902 | orchestrator |
2026-04-05 00:44:58.825916 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-04-05 00:44:58.825924 | orchestrator |
2026-04-05 00:44:58.825932 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-05 00:44:58.825940 | orchestrator | Sunday 05 April 2026 00:44:57 +0000 (0:00:01.342) 0:00:29.102 **********
2026-04-05 00:44:58.825947 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-05 00:44:58.825955 | orchestrator |
2026-04-05 00:44:58.825963 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-05 00:44:58.825971 | orchestrator | Sunday 05 April 2026 00:44:57 +0000 (0:00:00.577) 0:00:29.680 **********
2026-04-05 00:44:58.825979 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:44:58.825987 | orchestrator |
2026-04-05 00:44:58.825995 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:58.826003 | orchestrator | Sunday 05 April 2026 00:44:58 +0000 (0:00:00.864) 0:00:30.544 **********
2026-04-05 00:44:58.826010 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-04-05 00:44:58.826073 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-04-05 00:44:58.826082 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-04-05 00:44:58.826090 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-04-05 00:44:58.826098 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-04-05 00:44:58.826112 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-04-05 00:45:08.416391 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-04-05 00:45:08.416526 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-04-05 00:45:08.416545 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-04-05 00:45:08.416557 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-04-05 00:45:08.416568 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-04-05 00:45:08.416579 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-04-05 00:45:08.416590 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-04-05 00:45:08.416601 | orchestrator |
2026-04-05 00:45:08.416612 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:45:08.416624 | orchestrator | Sunday 05 April 2026 00:44:58 +0000 (0:00:00.424) 0:00:30.968 **********
2026-04-05 00:45:08.416635 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:45:08.416646 | orchestrator |
2026-04-05 00:45:08.416658 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:45:08.416668 | orchestrator | Sunday 05 April 2026 00:44:59 +0000 (0:00:00.209) 0:00:31.178 **********
2026-04-05 00:45:08.416679 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:45:08.416690 | orchestrator |
2026-04-05 00:45:08.416701 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:45:08.416711 | orchestrator | Sunday 05 April 2026 00:44:59 +0000 (0:00:00.194) 0:00:31.372 **********
2026-04-05 00:45:08.416722 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:45:08.416733 | orchestrator |
2026-04-05 00:45:08.416749 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:45:08.416769 | orchestrator | Sunday 05 April 2026 00:44:59 +0000 (0:00:00.273) 0:00:31.645 **********
2026-04-05 00:45:08.416787 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:45:08.416806 | orchestrator |
2026-04-05 00:45:08.416825 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:45:08.416844 | orchestrator | Sunday 05 April 2026 00:44:59 +0000 (0:00:00.343) 0:00:31.989 **********
2026-04-05 00:45:08.416890 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:45:08.416903 | orchestrator |
2026-04-05 00:45:08.416915 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:45:08.416929 | orchestrator | Sunday 05 April 2026 00:45:00 +0000 (0:00:00.281) 0:00:32.271 **********
2026-04-05 00:45:08.416942 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:45:08.416955 | orchestrator |
2026-04-05 00:45:08.416968 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:45:08.416980 | orchestrator | Sunday 05 April 2026 00:45:00 +0000 (0:00:00.339) 0:00:32.611 **********
2026-04-05 00:45:08.416995 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:45:08.417015 | orchestrator |
2026-04-05 00:45:08.417034 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:45:08.417053 | orchestrator | Sunday 05 April 2026 00:45:00 +0000 (0:00:00.299) 0:00:32.910 **********
2026-04-05 00:45:08.417072 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:45:08.417090 | orchestrator |
2026-04-05 00:45:08.417110 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:45:08.417129 | orchestrator | Sunday 05 April 2026 00:45:01 +0000 (0:00:00.246) 0:00:33.156 **********
2026-04-05 00:45:08.417148 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d10d19df-84d5-4f9c-9dff-ab89b235cba9)
2026-04-05 00:45:08.417170 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d10d19df-84d5-4f9c-9dff-ab89b235cba9)
2026-04-05 00:45:08.417189 | orchestrator |
2026-04-05 00:45:08.417224 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:45:08.417238 | orchestrator | Sunday 05 April 2026 00:45:02 +0000 (0:00:00.950) 0:00:34.107 **********
2026-04-05 00:45:08.417266 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a543ca24-8ce5-4d4d-a7ab-f0db2d7f7bb2)
2026-04-05 00:45:08.417280 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a543ca24-8ce5-4d4d-a7ab-f0db2d7f7bb2)
2026-04-05 00:45:08.417293 | orchestrator |
2026-04-05 00:45:08.417305 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:45:08.417316 | orchestrator | Sunday 05 April 2026 00:45:02 +0000 (0:00:00.884) 0:00:34.991 **********
2026-04-05 00:45:08.417327 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e02e3eed-6f8b-4cff-9a7e-0f14751ef6ba)
2026-04-05 00:45:08.417338 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e02e3eed-6f8b-4cff-9a7e-0f14751ef6ba)
2026-04-05 00:45:08.417349 | orchestrator |
2026-04-05 00:45:08.417360 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:45:08.417370 | orchestrator | Sunday 05 April 2026 00:45:03 +0000 (0:00:00.532) 0:00:35.524 **********
2026-04-05 00:45:08.417381 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_160e21cb-7f36-4211-96c7-9609d25dd0e2)
2026-04-05 00:45:08.417392 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_160e21cb-7f36-4211-96c7-9609d25dd0e2)
2026-04-05 00:45:08.417403 | orchestrator |
2026-04-05 00:45:08.417414 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:45:08.417425 | orchestrator | Sunday 05 April 2026 00:45:03 +0000 (0:00:00.506) 0:00:36.030 **********
2026-04-05 00:45:08.417435 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-05 00:45:08.417446 | orchestrator |
2026-04-05 00:45:08.417457 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:45:08.417507 | orchestrator | Sunday 05 April 2026 00:45:04 +0000 (0:00:00.361) 0:00:36.392 **********
2026-04-05 00:45:08.417519 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-04-05 00:45:08.417530 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-04-05 00:45:08.417542 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-04-05 00:45:08.417553 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-04-05 00:45:08.417572 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-04-05 00:45:08.417582 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-04-05 00:45:08.417593 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-04-05 00:45:08.417604 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-04-05 00:45:08.417614 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-04-05 00:45:08.417625 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-04-05 00:45:08.417636 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-04-05 00:45:08.417647 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-04-05 00:45:08.417657 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-04-05 00:45:08.417668 | orchestrator |
2026-04-05 00:45:08.417679 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:45:08.417690 | orchestrator | Sunday 05 April 2026 00:45:04 +0000 (0:00:00.424) 0:00:36.816 **********
2026-04-05 00:45:08.417700 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:45:08.417711 | orchestrator |
2026-04-05 00:45:08.417722 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:45:08.417732 | orchestrator | Sunday 05 April 2026 00:45:04 +0000 (0:00:00.228) 0:00:37.045 **********
2026-04-05 00:45:08.417743 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:45:08.417754 | orchestrator |
2026-04-05 00:45:08.417765 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:45:08.417775 | orchestrator | Sunday 05 April 2026 00:45:05 +0000 (0:00:00.202) 0:00:37.247 **********
2026-04-05 00:45:08.417786 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:45:08.417797 | orchestrator |
2026-04-05 00:45:08.417808 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:45:08.417818 | orchestrator | Sunday 05 April 2026 00:45:05 +0000 (0:00:00.235) 0:00:37.483 **********
2026-04-05 00:45:08.417829 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:45:08.417840 | orchestrator |
2026-04-05 00:45:08.417850 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:45:08.417861 | orchestrator | Sunday 05 April 2026 00:45:05 +0000 (0:00:00.228) 0:00:37.712 **********
2026-04-05 00:45:08.417872 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:45:08.417883 | orchestrator |
2026-04-05 00:45:08.417893 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:45:08.417904 | orchestrator | Sunday 05 April 2026 00:45:05 +0000 (0:00:00.197) 0:00:37.909 **********
2026-04-05 00:45:08.417915 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:45:08.417926 | orchestrator |
2026-04-05 00:45:08.417937 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:45:08.417947 | orchestrator | Sunday 05 April 2026 00:45:06 +0000 (0:00:00.706) 0:00:38.616 **********
2026-04-05 00:45:08.417958 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:45:08.417968 | orchestrator |
2026-04-05 00:45:08.417979 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:45:08.417990 | orchestrator | Sunday 05 April 2026 00:45:06 +0000 (0:00:00.226) 0:00:38.842 **********
2026-04-05 00:45:08.418000 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:45:08.418011 | orchestrator |
2026-04-05 00:45:08.418080 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:45:08.418092 | orchestrator | Sunday 05 April 2026 00:45:07 +0000 (0:00:00.247) 0:00:39.089 **********
2026-04-05 00:45:08.418102 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-04-05 00:45:08.418120 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-04-05 00:45:08.418131 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-04-05 00:45:08.418142 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-04-05 00:45:08.418153 | orchestrator |
2026-04-05 00:45:08.418163 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:45:08.418174 | orchestrator | Sunday 05 April 2026 00:45:07 +0000 (0:00:00.629) 0:00:39.719 **********
2026-04-05 00:45:08.418185 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:45:08.418196 | orchestrator |
2026-04-05 00:45:08.418206 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:45:08.418217 | orchestrator | Sunday 05 April 2026 00:45:07 +0000 (0:00:00.194) 0:00:39.913 **********
2026-04-05 00:45:08.418228 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:45:08.418239 | orchestrator |
2026-04-05 00:45:08.418249 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:45:08.418260 | orchestrator | Sunday 05 April 2026 00:45:08 +0000 (0:00:00.180) 0:00:40.094 **********
2026-04-05 00:45:08.418271 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:45:08.418282 | orchestrator |
2026-04-05 00:45:08.418292 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:45:08.418303 | orchestrator | Sunday 05 April 2026 00:45:08 +0000 (0:00:00.178) 0:00:40.272 **********
2026-04-05 00:45:08.418314 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:45:08.418325 | orchestrator |
2026-04-05 00:45:08.418343 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-04-05 00:45:12.287208 | orchestrator | Sunday 05 April 2026 00:45:08 +0000 (0:00:00.201) 0:00:40.474 **********
2026-04-05 00:45:12.287261 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-04-05 00:45:12.287266 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-04-05 00:45:12.287270 | orchestrator |
2026-04-05 00:45:12.287275 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-04-05 00:45:12.287279 | orchestrator | Sunday 05 April 2026 00:45:08 +0000 (0:00:00.168) 0:00:40.643 **********
2026-04-05 00:45:12.287283 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:45:12.287287 | orchestrator |
2026-04-05 00:45:12.287291 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-04-05 00:45:12.287295 | orchestrator | Sunday 05 April 2026 00:45:08 +0000 (0:00:00.141) 0:00:40.785 **********
2026-04-05 00:45:12.287308 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:45:12.287312 | orchestrator |
2026-04-05 00:45:12.287315 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-04-05 00:45:12.287319 | orchestrator | Sunday 05 April 2026 00:45:08 +0000 (0:00:00.133) 0:00:40.918 **********
2026-04-05 00:45:12.287323 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:45:12.287327 | orchestrator |
2026-04-05 00:45:12.287331 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-04-05 00:45:12.287335 | orchestrator | Sunday 05 April 2026 00:45:08 +0000 (0:00:00.139) 0:00:41.058 **********
2026-04-05 00:45:12.287338 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:45:12.287343 | orchestrator |
2026-04-05 00:45:12.287347 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-04-05 00:45:12.287351 | orchestrator | Sunday 05 April 2026 00:45:09 +0000 (0:00:00.270) 0:00:41.329 **********
2026-04-05 00:45:12.287355 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3bb92c70-c222-5380-a7bf-d21f250fcd2a'}})
2026-04-05 00:45:12.287361 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '377d1900-3c05-5c55-820b-3d4ba19b512c'}})
2026-04-05 00:45:12.287365 | orchestrator |
2026-04-05 00:45:12.287368 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-04-05 00:45:12.287372 | orchestrator | Sunday 05 April 2026 00:45:09 +0000 (0:00:00.167) 0:00:41.496 **********
2026-04-05 00:45:12.287376 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3bb92c70-c222-5380-a7bf-d21f250fcd2a'}})
2026-04-05 00:45:12.287390 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '377d1900-3c05-5c55-820b-3d4ba19b512c'}})
2026-04-05 00:45:12.287395 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:45:12.287399 | orchestrator |
2026-04-05 00:45:12.287402 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-04-05 00:45:12.287406 | orchestrator | Sunday 05 April 2026 00:45:09 +0000 (0:00:00.133) 0:00:41.630 **********
2026-04-05 00:45:12.287410 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3bb92c70-c222-5380-a7bf-d21f250fcd2a'}})
2026-04-05 00:45:12.287414 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '377d1900-3c05-5c55-820b-3d4ba19b512c'}})
2026-04-05 00:45:12.287418 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:45:12.287421 | orchestrator |
2026-04-05 00:45:12.287425 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-04-05 00:45:12.287429 | orchestrator | Sunday 05 April 2026 00:45:09 +0000 (0:00:00.150) 0:00:41.780 **********
2026-04-05 00:45:12.287432 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3bb92c70-c222-5380-a7bf-d21f250fcd2a'}})
2026-04-05 00:45:12.287436 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '377d1900-3c05-5c55-820b-3d4ba19b512c'}})
2026-04-05 00:45:12.287440 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:45:12.287444 | orchestrator |
2026-04-05 00:45:12.287448 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-04-05 00:45:12.287451 | orchestrator | Sunday 05 April 2026 00:45:09 +0000 (0:00:00.150) 0:00:41.931 **********
2026-04-05 00:45:12.287455 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:45:12.287459 | orchestrator |
2026-04-05 00:45:12.287462 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-04-05 00:45:12.287466 | orchestrator | Sunday 05 April 2026 00:45:09 +0000 (0:00:00.130) 0:00:42.062 **********
2026-04-05 00:45:12.287470 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:45:12.287473 | orchestrator |
2026-04-05 00:45:12.287477 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-04-05 00:45:12.287511 | orchestrator | Sunday 05 April 2026 00:45:10 +0000 (0:00:00.134) 0:00:42.196 **********
2026-04-05 00:45:12.287516 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:45:12.287519 | orchestrator |
2026-04-05 00:45:12.287523 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-04-05 00:45:12.287527 | orchestrator | Sunday 05 April 2026 00:45:10 +0000 (0:00:00.142) 0:00:42.339 **********
2026-04-05 00:45:12.287531 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:45:12.287535 | orchestrator |
2026-04-05 00:45:12.287538 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-04-05 00:45:12.287542 | orchestrator | Sunday 05 April 2026 00:45:10 +0000 (0:00:00.120) 0:00:42.460 **********
2026-04-05 00:45:12.287546 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:45:12.287550 | orchestrator |
2026-04-05 00:45:12.287553 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-04-05 00:45:12.287557 | orchestrator | Sunday 05 April 2026 00:45:10 +0000 (0:00:00.130) 0:00:42.590 **********
2026-04-05 00:45:12.287561 | orchestrator | ok: [testbed-node-5] => {
2026-04-05 00:45:12.287565 | orchestrator |     "ceph_osd_devices": {
2026-04-05 00:45:12.287568 | orchestrator |         "sdb": {
2026-04-05 00:45:12.287581 | orchestrator |             "osd_lvm_uuid": "3bb92c70-c222-5380-a7bf-d21f250fcd2a"
2026-04-05 00:45:12.287585 | orchestrator |         },
2026-04-05 00:45:12.287589 | orchestrator |         "sdc": {
2026-04-05 00:45:12.287593 | orchestrator |             "osd_lvm_uuid": "377d1900-3c05-5c55-820b-3d4ba19b512c"
2026-04-05 00:45:12.287597 | orchestrator |         }
2026-04-05 00:45:12.287600 | orchestrator |     }
2026-04-05 00:45:12.287604 | orchestrator | }
2026-04-05 00:45:12.287608 | orchestrator |
2026-04-05 00:45:12.287615 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-05 00:45:12.287618 | orchestrator | Sunday 05 April 2026 00:45:10 +0000 (0:00:00.137) 0:00:42.727 **********
2026-04-05 00:45:12.287622 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:45:12.287626 | orchestrator |
2026-04-05 00:45:12.287630 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-05 00:45:12.287633 | orchestrator | Sunday 05 April 2026 00:45:10 +0000 (0:00:00.117) 0:00:42.845 **********
2026-04-05 00:45:12.287637 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:45:12.287641 | orchestrator |
2026-04-05 00:45:12.287644 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-05 00:45:12.287648 | orchestrator | Sunday 05 April 2026 00:45:11 +0000 (0:00:00.273) 0:00:43.119 **********
2026-04-05 00:45:12.287652 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:45:12.287655 | orchestrator |
2026-04-05 00:45:12.287659 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-05 00:45:12.287663 | orchestrator | Sunday 05 April 2026 00:45:11 +0000 (0:00:00.122) 0:00:43.241 **********
2026-04-05 00:45:12.287667 | orchestrator | changed: [testbed-node-5] => {
2026-04-05 00:45:12.287670 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-04-05 00:45:12.287674 | orchestrator |         "ceph_osd_devices": {
2026-04-05 00:45:12.287678 | orchestrator |             "sdb": {
2026-04-05 00:45:12.287682 | orchestrator |                 "osd_lvm_uuid": "3bb92c70-c222-5380-a7bf-d21f250fcd2a"
2026-04-05 00:45:12.287686 | orchestrator |             },
2026-04-05 00:45:12.287689 | orchestrator |             "sdc": {
2026-04-05 00:45:12.287693 | orchestrator |                 "osd_lvm_uuid": "377d1900-3c05-5c55-820b-3d4ba19b512c"
2026-04-05 00:45:12.287697 | orchestrator |             }
2026-04-05 00:45:12.287700 | orchestrator |         },
2026-04-05 00:45:12.287704 | orchestrator |         "lvm_volumes": [
2026-04-05 00:45:12.287708 | orchestrator |             {
2026-04-05 00:45:12.287712 | orchestrator |                 "data": "osd-block-3bb92c70-c222-5380-a7bf-d21f250fcd2a",
2026-04-05 00:45:12.287715 | orchestrator |                 "data_vg": "ceph-3bb92c70-c222-5380-a7bf-d21f250fcd2a"
2026-04-05 00:45:12.287719 | orchestrator |             },
2026-04-05 00:45:12.287725 | orchestrator |             {
2026-04-05 00:45:12.287729 | orchestrator |                 "data": "osd-block-377d1900-3c05-5c55-820b-3d4ba19b512c",
2026-04-05 00:45:12.287733 | orchestrator |                 "data_vg": "ceph-377d1900-3c05-5c55-820b-3d4ba19b512c"
2026-04-05 00:45:12.287736 | orchestrator |             }
2026-04-05 00:45:12.287740 | orchestrator |         ]
2026-04-05 00:45:12.287744 | orchestrator |     }
2026-04-05 00:45:12.287747 | orchestrator | }
2026-04-05 00:45:12.287751 | orchestrator |
2026-04-05 00:45:12.287755 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-05 00:45:12.287759 | orchestrator | Sunday 05 April 2026 00:45:11 +0000 (0:00:00.255) 0:00:43.497 **********
2026-04-05 00:45:12.287762 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-05 00:45:12.287766 | orchestrator |
2026-04-05 00:45:12.287770 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:45:12.287773 | orchestrator | testbed-node-3 : ok=42 changed=2 unreachable=0 failed=0 skipped=32 rescued=0 ignored=0
2026-04-05 00:45:12.287778 | orchestrator | testbed-node-4 : ok=42 changed=2 unreachable=0 failed=0 skipped=32 rescued=0 ignored=0
2026-04-05 00:45:12.287782 | orchestrator | testbed-node-5 : ok=42 changed=2 unreachable=0 failed=0 skipped=32 rescued=0 ignored=0
2026-04-05 00:45:12.287786 | orchestrator |
2026-04-05 00:45:12.287790 | orchestrator |
2026-04-05 00:45:12.287795 | orchestrator |
2026-04-05 00:45:12.287799 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:45:12.287803 | orchestrator | Sunday 05 April 2026 00:45:12 +0000 (0:00:00.835) 0:00:44.333 **********
2026-04-05 00:45:12.287810 | orchestrator | ===============================================================================
2026-04-05 00:45:12.287815 | orchestrator | Write configuration file ------------------------------------------------ 4.59s
2026-04-05 00:45:12.287819 | orchestrator | Get initial list of available block devices ----------------------------- 1.34s
2026-04-05 00:45:12.287826 | orchestrator | Add known partitions to the list of available block devices ------------- 1.25s
2026-04-05 00:45:12.287831 | orchestrator | Add known links to the list of available block devices ------------------ 1.23s
2026-04-05 00:45:12.287835 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.12s
2026-04-05 00:45:12.287840 | orchestrator | Add known partitions to the list of available block devices ------------- 1.04s
2026-04-05 00:45:12.287844 | orchestrator | Add known links to the list of available block devices ------------------ 0.95s
2026-04-05 00:45:12.287854 | orchestrator | Add known links to the list of available block devices ------------------ 0.88s
2026-04-05 00:45:12.287858 | orchestrator | Add known links to the list of available block devices ------------------ 0.83s
2026-04-05 00:45:12.287863 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.74s
2026-04-05 00:45:12.287872 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s
2026-04-05 00:45:12.287876 | orchestrator | Add known links to the list of available block devices ------------------ 0.71s
2026-04-05 00:45:12.287881 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.70s
2026-04-05 00:45:12.287888 | orchestrator | Print configuration data ------------------------------------------------ 0.67s
2026-04-05 00:45:12.569191 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s
2026-04-05 00:45:12.569301 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s
2026-04-05 00:45:12.569327 | orchestrator | Add known partitions to the list of available block devices ------------- 0.63s
2026-04-05 00:45:12.569345 | orchestrator | Add known partitions to the list of available block devices ------------- 0.62s
2026-04-05 00:45:12.569366 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.54s
2026-04-05 00:45:12.569388 | orchestrator | Add known links to the list of available block devices ------------------ 0.53s
2026-04-05 00:45:34.226345 | orchestrator | 2026-04-05 00:45:34 | INFO | Task 12c47c12-3915-4028-85f2-392cd689ad2a (sync inventory) is running in background. Output coming soon.
2026-04-05 00:46:06.999958 | orchestrator | 2026-04-05 00:45:35 | INFO  | Starting group_vars file reorganization
2026-04-05 00:46:07.000060 | orchestrator | 2026-04-05 00:45:35 | INFO  | Moved 0 file(s) to their respective directories
2026-04-05 00:46:07.000071 | orchestrator | 2026-04-05 00:45:35 | INFO  | Group_vars file reorganization completed
2026-04-05 00:46:07.000079 | orchestrator | 2026-04-05 00:45:38 | INFO  | Starting variable preparation from inventory
2026-04-05 00:46:07.000087 | orchestrator | 2026-04-05 00:45:41 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-04-05 00:46:07.000094 | orchestrator | 2026-04-05 00:45:41 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-04-05 00:46:07.000120 | orchestrator | 2026-04-05 00:45:41 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-04-05 00:46:07.000128 | orchestrator | 2026-04-05 00:45:41 | INFO  | 3 file(s) written, 6 host(s) processed
2026-04-05 00:46:07.000135 | orchestrator | 2026-04-05 00:45:41 | INFO  | Variable preparation completed
2026-04-05 00:46:07.000142 | orchestrator | 2026-04-05 00:45:43 | INFO  | Starting inventory overwrite handling
2026-04-05 00:46:07.000149 | orchestrator | 2026-04-05 00:45:43 | INFO  | Handling group overwrites in 99-overwrite
2026-04-05 00:46:07.000156 | orchestrator | 2026-04-05 00:45:43 | INFO  | Removing group frr:children from 60-generic
2026-04-05 00:46:07.000186 | orchestrator | 2026-04-05 00:45:43 | INFO  | Removing group netbird:children from 50-infrastructure
2026-04-05 00:46:07.000193 | orchestrator | 2026-04-05 00:45:43 | INFO  | Removing group ceph-mds from 50-ceph
2026-04-05 00:46:07.000199 | orchestrator | 2026-04-05 00:45:43 | INFO  | Removing group ceph-rgw from 50-ceph
2026-04-05 00:46:07.000206 | orchestrator | 2026-04-05 00:45:43 | INFO  | Handling group overwrites in 20-roles
2026-04-05 00:46:07.000213 | orchestrator | 2026-04-05 00:45:43 | INFO  | Removing group k3s_node from 50-infrastructure
2026-04-05 00:46:07.000220 | orchestrator | 2026-04-05 00:45:43 | INFO  | Removed 5 group(s) in total
2026-04-05 00:46:07.000226 | orchestrator | 2026-04-05 00:45:43 | INFO  | Inventory overwrite handling completed
2026-04-05 00:46:07.000233 | orchestrator | 2026-04-05 00:45:44 | INFO  | Starting merge of inventory files
2026-04-05 00:46:07.000239 | orchestrator | 2026-04-05 00:45:44 | INFO  | Inventory files merged successfully
2026-04-05 00:46:07.000246 | orchestrator | 2026-04-05 00:45:49 | INFO  | Generating minified hosts file
2026-04-05 00:46:07.000252 | orchestrator | 2026-04-05 00:45:51 | INFO  | Successfully wrote minified hosts file to /inventory.merge/hosts-minified.yml
2026-04-05 00:46:07.000259 | orchestrator | 2026-04-05 00:45:51 | INFO  | Successfully wrote fast inventory to /inventory.merge/fast/hosts.json
2026-04-05 00:46:07.000266 | orchestrator | 2026-04-05 00:45:53 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-04-05 00:46:07.000272 | orchestrator | 2026-04-05 00:46:05 | INFO  | Successfully wrote ClusterShell configuration
2026-04-05 00:46:07.000278 | orchestrator | [master 9f0e0d2] 2026-04-05-00-46
2026-04-05 00:46:07.000286 | orchestrator | 5 files changed, 75 insertions(+), 10 deletions(-)
2026-04-05 00:46:07.000294 | orchestrator | create mode 100644 fast/host_vars/testbed-node-3/ceph-lvm-configuration.yml
2026-04-05 00:46:07.000301 | orchestrator | create mode 100644 fast/host_vars/testbed-node-4/ceph-lvm-configuration.yml
2026-04-05 00:46:07.000308 | orchestrator | create mode 100644 fast/host_vars/testbed-node-5/ceph-lvm-configuration.yml
2026-04-05 00:46:09.129096 | orchestrator | 2026-04-05 00:46:09 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-04-05 00:46:09.231598 | orchestrator | 2026-04-05 00:46:09 | INFO  | Task e713e659-e918-4737-92cb-937c9748cc3d (ceph-create-lvm-devices) was prepared for execution.
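The overwrite handling logged above removes a group from a lower-priority inventory fragment whenever a higher-priority fragment (such as 99-overwrite) redefines it, then merges the fragments in ascending order. A minimal sketch of that precedence rule, with hypothetical data and a helper that is not the actual OSISM implementation:

```python
def merge_inventory(fragments):
    """fragments: list of (filename, {group: [hosts]}) in ascending priority.

    A group redefined by a later fragment replaces the earlier definition,
    and the replacement is recorded in the same style as the log messages
    above (e.g. "Removing group ceph-mds from 50-ceph").
    """
    merged, source, removed = {}, {}, []
    for fname, groups in fragments:
        for group, hosts in groups.items():
            if group in merged:
                removed.append(f"Removing group {group} from {source[group]}")
            merged[group] = hosts     # later fragment wins
            source[group] = fname     # remember where the definition came from
    return merged, removed

# Illustrative fragments; only the group names mirror the log.
fragments = [
    ("50-ceph", {"ceph-mds": ["testbed-node-0"], "ceph-rgw": ["testbed-node-0"]}),
    ("99-overwrite", {"ceph-mds": [], "ceph-rgw": []}),
]
merged, removed = merge_inventory(fragments)
```

After the merge, `merged` holds only the highest-priority definition of each group, which is what a minified hosts file would then serialize.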
2026-04-05 00:46:09.231718 | orchestrator | 2026-04-05 00:46:09 | INFO  | It takes a moment until task e713e659-e918-4737-92cb-937c9748cc3d (ceph-create-lvm-devices) has been started and output is visible here.
2026-04-05 00:46:22.844454 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-05 00:46:22.844635 | orchestrator | 2.16.14
2026-04-05 00:46:22.844658 | orchestrator |
2026-04-05 00:46:22.844671 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-04-05 00:46:22.844683 | orchestrator |
2026-04-05 00:46:22.844694 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-05 00:46:22.844706 | orchestrator | Sunday 05 April 2026 00:46:14 +0000 (0:00:00.275) 0:00:00.275 **********
2026-04-05 00:46:22.844717 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-05 00:46:22.844728 | orchestrator |
2026-04-05 00:46:22.844739 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-05 00:46:22.844750 | orchestrator | Sunday 05 April 2026 00:46:15 +0000 (0:00:00.241) 0:00:00.516 **********
2026-04-05 00:46:22.844761 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:46:22.844772 | orchestrator |
2026-04-05 00:46:22.844783 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:46:22.844793 | orchestrator | Sunday 05 April 2026 00:46:15 +0000 (0:00:00.254) 0:00:00.771 **********
2026-04-05 00:46:22.844832 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-04-05 00:46:22.844843 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-04-05 00:46:22.844854 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-04-05 00:46:22.844864 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-04-05 00:46:22.844875 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-04-05 00:46:22.844886 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-04-05 00:46:22.844897 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-04-05 00:46:22.844908 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-04-05 00:46:22.844918 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-04-05 00:46:22.844929 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-04-05 00:46:22.844940 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-04-05 00:46:22.844950 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-04-05 00:46:22.844961 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-04-05 00:46:22.844971 | orchestrator |
2026-04-05 00:46:22.844983 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:46:22.844996 | orchestrator | Sunday 05 April 2026 00:46:15 +0000 (0:00:00.438) 0:00:01.210 **********
2026-04-05 00:46:22.845010 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:22.845023 | orchestrator |
2026-04-05 00:46:22.845036 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:46:22.845049 | orchestrator | Sunday 05 April 2026 00:46:16 +0000 (0:00:00.500) 0:00:01.711 **********
2026-04-05 00:46:22.845062 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:22.845075 | orchestrator |
2026-04-05 00:46:22.845088 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:46:22.845102 | orchestrator | Sunday 05 April 2026 00:46:16 +0000 (0:00:00.207) 0:00:01.918 **********
2026-04-05 00:46:22.845132 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:22.845143 | orchestrator |
2026-04-05 00:46:22.845154 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:46:22.845165 | orchestrator | Sunday 05 April 2026 00:46:16 +0000 (0:00:00.232) 0:00:02.151 **********
2026-04-05 00:46:22.845195 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:22.845218 | orchestrator |
2026-04-05 00:46:22.845230 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:46:22.845240 | orchestrator | Sunday 05 April 2026 00:46:17 +0000 (0:00:00.191) 0:00:02.343 **********
2026-04-05 00:46:22.845251 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:22.845262 | orchestrator |
2026-04-05 00:46:22.845273 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:46:22.845284 | orchestrator | Sunday 05 April 2026 00:46:17 +0000 (0:00:00.218) 0:00:02.562 **********
2026-04-05 00:46:22.845294 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:22.845305 | orchestrator |
2026-04-05 00:46:22.845316 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:46:22.845327 | orchestrator | Sunday 05 April 2026 00:46:17 +0000 (0:00:00.176) 0:00:02.738 **********
2026-04-05 00:46:22.845338 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:22.845349 | orchestrator |
2026-04-05 00:46:22.845359 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:46:22.845370 | orchestrator | Sunday 05 April 2026 00:46:17 +0000 (0:00:00.173) 0:00:02.911 **********
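The repeated "Add known links" tasks above extend each kernel device name (sda, sdb, …) with its stable /dev/disk/by-id aliases, so the Ceph configuration can address disks by persistent names. A rough sketch of that mapping step, with the symlink table passed in as plain data; the helper and the device assignments are illustrative, not the logic of /ansible/tasks/_add-device-links.yml:

```python
def device_aliases(by_id_links):
    """Invert {by-id link name: kernel device} into
    {kernel device: sorted list of by-id link names}.

    On a real host, by_id_links would be built by resolving the
    symlinks under /dev/disk/by-id.
    """
    aliases = {}
    for link, device in by_id_links.items():
        aliases.setdefault(device, []).append(link)
    return {dev: sorted(names) for dev, names in aliases.items()}

# Link names taken from the log output; mapping them to sda/sr0 is an
# assumption for the example.
links = {
    "scsi-0QEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5": "sda",
    "scsi-SQEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5": "sda",
    "ata-QEMU_DVD-ROM_QM00001": "sr0",
}
aliases = device_aliases(links)
```

Each ok'd item pair in the log corresponds to one such two-alias entry for a QEMU virtual disk.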
2026-04-05 00:46:22.845381 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:22.845402 | orchestrator |
2026-04-05 00:46:22.845412 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:46:22.845423 | orchestrator | Sunday 05 April 2026 00:46:17 +0000 (0:00:00.209) 0:00:03.121 **********
2026-04-05 00:46:22.845434 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5)
2026-04-05 00:46:22.845446 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5)
2026-04-05 00:46:22.845457 | orchestrator |
2026-04-05 00:46:22.845467 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:46:22.845519 | orchestrator | Sunday 05 April 2026 00:46:18 +0000 (0:00:00.439) 0:00:03.561 **********
2026-04-05 00:46:22.845532 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_caeb3c42-c4b8-40bd-8e18-9e72dc321772)
2026-04-05 00:46:22.845542 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_caeb3c42-c4b8-40bd-8e18-9e72dc321772)
2026-04-05 00:46:22.845554 | orchestrator |
2026-04-05 00:46:22.845565 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:46:22.845575 | orchestrator | Sunday 05 April 2026 00:46:18 +0000 (0:00:00.394) 0:00:03.955 **********
2026-04-05 00:46:22.845586 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_62ed18a5-03b2-4cb7-a868-d43e6cb85064)
2026-04-05 00:46:22.845597 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_62ed18a5-03b2-4cb7-a868-d43e6cb85064)
2026-04-05 00:46:22.845607 | orchestrator |
2026-04-05 00:46:22.845618 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:46:22.845629 | orchestrator | Sunday 05 April 2026 00:46:19 +0000 (0:00:00.652) 0:00:04.608 **********
2026-04-05 00:46:22.845640 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_831c674b-a7a8-4a18-9cfe-2b7acfd18a4e)
2026-04-05 00:46:22.845650 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_831c674b-a7a8-4a18-9cfe-2b7acfd18a4e)
2026-04-05 00:46:22.845661 | orchestrator |
2026-04-05 00:46:22.845672 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:46:22.845683 | orchestrator | Sunday 05 April 2026 00:46:20 +0000 (0:00:00.740) 0:00:05.348 **********
2026-04-05 00:46:22.845693 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-05 00:46:22.845704 | orchestrator |
2026-04-05 00:46:22.845715 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:46:22.845732 | orchestrator | Sunday 05 April 2026 00:46:20 +0000 (0:00:00.836) 0:00:06.184 **********
2026-04-05 00:46:22.845743 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-04-05 00:46:22.845754 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-04-05 00:46:22.845765 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-04-05 00:46:22.845775 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-04-05 00:46:22.845786 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-04-05 00:46:22.845797 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-04-05 00:46:22.845807 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-04-05 00:46:22.845818 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-04-05 00:46:22.845828 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-04-05 00:46:22.845839 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-04-05 00:46:22.845850 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-04-05 00:46:22.845860 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-04-05 00:46:22.845878 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-04-05 00:46:22.845889 | orchestrator |
2026-04-05 00:46:22.845900 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:46:22.845910 | orchestrator | Sunday 05 April 2026 00:46:21 +0000 (0:00:00.447) 0:00:06.631 **********
2026-04-05 00:46:22.845921 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:22.845932 | orchestrator |
2026-04-05 00:46:22.845943 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:46:22.845953 | orchestrator | Sunday 05 April 2026 00:46:21 +0000 (0:00:00.203) 0:00:06.835 **********
2026-04-05 00:46:22.845964 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:22.845975 | orchestrator |
2026-04-05 00:46:22.845986 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:46:22.845996 | orchestrator | Sunday 05 April 2026 00:46:21 +0000 (0:00:00.209) 0:00:07.045 **********
2026-04-05 00:46:22.846007 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:22.846086 | orchestrator |
2026-04-05 00:46:22.846100 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:46:22.846111 | orchestrator | Sunday 05 April 2026 00:46:21 +0000 (0:00:00.229) 0:00:07.274 **********
2026-04-05 00:46:22.846122 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:22.846133 | orchestrator |
2026-04-05 00:46:22.846144 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:46:22.846155 | orchestrator | Sunday 05 April 2026 00:46:22 +0000 (0:00:00.247) 0:00:07.522 **********
2026-04-05 00:46:22.846165 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:22.846176 | orchestrator |
2026-04-05 00:46:22.846187 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:46:22.846198 | orchestrator | Sunday 05 April 2026 00:46:22 +0000 (0:00:00.204) 0:00:07.726 **********
2026-04-05 00:46:22.846209 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:22.846220 | orchestrator |
2026-04-05 00:46:22.846230 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:46:22.846242 | orchestrator | Sunday 05 April 2026 00:46:22 +0000 (0:00:00.205) 0:00:07.932 **********
2026-04-05 00:46:22.846252 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:22.846263 | orchestrator |
2026-04-05 00:46:22.846282 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:46:31.217866 | orchestrator | Sunday 05 April 2026 00:46:22 +0000 (0:00:00.239) 0:00:08.172 **********
2026-04-05 00:46:31.217962 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:31.217973 | orchestrator |
2026-04-05 00:46:31.217981 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:46:31.217988 | orchestrator | Sunday 05 April 2026 00:46:23 +0000 (0:00:00.211) 0:00:08.383 **********
2026-04-05 00:46:31.217995 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-04-05 00:46:31.218002 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-04-05 00:46:31.218010 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-04-05 00:46:31.218057 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-04-05 00:46:31.218065 | orchestrator |
2026-04-05 00:46:31.218073 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:46:31.218080 | orchestrator | Sunday 05 April 2026 00:46:24 +0000 (0:00:01.156) 0:00:09.540 **********
2026-04-05 00:46:31.218086 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:31.218093 | orchestrator |
2026-04-05 00:46:31.218100 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:46:31.218107 | orchestrator | Sunday 05 April 2026 00:46:24 +0000 (0:00:00.226) 0:00:09.766 **********
2026-04-05 00:46:31.218113 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:31.218120 | orchestrator |
2026-04-05 00:46:31.218126 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:46:31.218155 | orchestrator | Sunday 05 April 2026 00:46:24 +0000 (0:00:00.270) 0:00:10.036 **********
2026-04-05 00:46:31.218162 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:31.218169 | orchestrator |
2026-04-05 00:46:31.218176 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:46:31.218183 | orchestrator | Sunday 05 April 2026 00:46:24 +0000 (0:00:00.204) 0:00:10.241 **********
2026-04-05 00:46:31.218190 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:31.218198 | orchestrator |
2026-04-05 00:46:31.218205 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-05 00:46:31.218213 | orchestrator | Sunday 05 April 2026 00:46:25 +0000 (0:00:00.207) 0:00:10.449 **********
2026-04-05 00:46:31.218220 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:31.218227 | orchestrator |
2026-04-05 00:46:31.218235 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-05 00:46:31.218244 | orchestrator | Sunday 05 April 2026 00:46:25 +0000 (0:00:00.166) 0:00:10.615 **********
2026-04-05 00:46:31.218253 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bd7e6aba-230a-5307-afd3-3b474950d4d0'}})
2026-04-05 00:46:31.218262 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ffa9e237-b4c6-554d-9530-d8db42979c07'}})
2026-04-05 00:46:31.218270 | orchestrator |
2026-04-05 00:46:31.218278 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-05 00:46:31.218286 | orchestrator | Sunday 05 April 2026 00:46:25 +0000 (0:00:00.230) 0:00:10.846 **********
2026-04-05 00:46:31.218294 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-bd7e6aba-230a-5307-afd3-3b474950d4d0', 'data_vg': 'ceph-bd7e6aba-230a-5307-afd3-3b474950d4d0'})
2026-04-05 00:46:31.218306 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ffa9e237-b4c6-554d-9530-d8db42979c07', 'data_vg': 'ceph-ffa9e237-b4c6-554d-9530-d8db42979c07'})
2026-04-05 00:46:31.218314 | orchestrator |
2026-04-05 00:46:31.218323 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-05 00:46:31.218330 | orchestrator | Sunday 05 April 2026 00:46:27 +0000 (0:00:02.039) 0:00:12.886 **********
2026-04-05 00:46:31.218338 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bd7e6aba-230a-5307-afd3-3b474950d4d0', 'data_vg': 'ceph-bd7e6aba-230a-5307-afd3-3b474950d4d0'})
2026-04-05 00:46:31.218362 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ffa9e237-b4c6-554d-9530-d8db42979c07', 'data_vg': 'ceph-ffa9e237-b4c6-554d-9530-d8db42979c07'})
2026-04-05 00:46:31.218370 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:31.218378 | orchestrator |
2026-04-05 00:46:31.218385 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-05 00:46:31.218392 | orchestrator | Sunday 05 April 2026 00:46:27 +0000 (0:00:00.162) 0:00:13.048 **********
2026-04-05 00:46:31.218399 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-bd7e6aba-230a-5307-afd3-3b474950d4d0', 'data_vg': 'ceph-bd7e6aba-230a-5307-afd3-3b474950d4d0'})
2026-04-05 00:46:31.218406 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ffa9e237-b4c6-554d-9530-d8db42979c07', 'data_vg': 'ceph-ffa9e237-b4c6-554d-9530-d8db42979c07'})
2026-04-05 00:46:31.218413 | orchestrator |
2026-04-05 00:46:31.218420 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-05 00:46:31.218426 | orchestrator | Sunday 05 April 2026 00:46:29 +0000 (0:00:01.443) 0:00:14.492 **********
2026-04-05 00:46:31.218432 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bd7e6aba-230a-5307-afd3-3b474950d4d0', 'data_vg': 'ceph-bd7e6aba-230a-5307-afd3-3b474950d4d0'})
2026-04-05 00:46:31.218438 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ffa9e237-b4c6-554d-9530-d8db42979c07', 'data_vg': 'ceph-ffa9e237-b4c6-554d-9530-d8db42979c07'})
2026-04-05 00:46:31.218445 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:31.218452 | orchestrator |
2026-04-05 00:46:31.218460 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-05 00:46:31.218478 | orchestrator | Sunday 05 April 2026 00:46:29 +0000 (0:00:00.143) 0:00:14.636 **********
2026-04-05 00:46:31.218532 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:31.218541 | orchestrator |
2026-04-05 00:46:31.218549 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-05 00:46:31.218557 | orchestrator | Sunday 05 April 2026 00:46:29 +0000 (0:00:00.140) 0:00:14.777 **********
2026-04-05 00:46:31.218564 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bd7e6aba-230a-5307-afd3-3b474950d4d0', 'data_vg': 'ceph-bd7e6aba-230a-5307-afd3-3b474950d4d0'})
2026-04-05 00:46:31.218572 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ffa9e237-b4c6-554d-9530-d8db42979c07', 'data_vg': 'ceph-ffa9e237-b4c6-554d-9530-d8db42979c07'})
2026-04-05 00:46:31.218579 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:31.218586 | orchestrator |
2026-04-05 00:46:31.218593 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-05 00:46:31.218601 | orchestrator | Sunday 05 April 2026 00:46:29 +0000 (0:00:00.411) 0:00:15.188 **********
2026-04-05 00:46:31.218608 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:31.218615 | orchestrator |
2026-04-05 00:46:31.218622 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-05 00:46:31.218629 | orchestrator | Sunday 05 April 2026 00:46:29 +0000 (0:00:00.139) 0:00:15.327 **********
2026-04-05 00:46:31.218637 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bd7e6aba-230a-5307-afd3-3b474950d4d0', 'data_vg': 'ceph-bd7e6aba-230a-5307-afd3-3b474950d4d0'})
2026-04-05 00:46:31.218645 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ffa9e237-b4c6-554d-9530-d8db42979c07', 'data_vg': 'ceph-ffa9e237-b4c6-554d-9530-d8db42979c07'})
2026-04-05 00:46:31.218652 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:31.218660 | orchestrator |
2026-04-05 00:46:31.218672 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-05 00:46:31.218680 | orchestrator | Sunday 05 April 2026 00:46:30 +0000 (0:00:00.147) 0:00:15.475 **********
2026-04-05 00:46:31.218687 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:31.218694 | orchestrator |
2026-04-05 00:46:31.218701 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-05 00:46:31.218708 | orchestrator | Sunday 05 April 2026 00:46:30 +0000 (0:00:00.145) 0:00:15.621 **********
2026-04-05 00:46:31.218714 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bd7e6aba-230a-5307-afd3-3b474950d4d0', 'data_vg': 'ceph-bd7e6aba-230a-5307-afd3-3b474950d4d0'})
2026-04-05 00:46:31.218721 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ffa9e237-b4c6-554d-9530-d8db42979c07', 'data_vg': 'ceph-ffa9e237-b4c6-554d-9530-d8db42979c07'})
2026-04-05 00:46:31.218728 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:31.218736 | orchestrator |
2026-04-05 00:46:31.218743 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-05 00:46:31.218750 | orchestrator | Sunday 05 April 2026 00:46:30 +0000 (0:00:00.155) 0:00:15.773 **********
2026-04-05 00:46:31.218758 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:46:31.218765 | orchestrator |
2026-04-05 00:46:31.218772 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-05 00:46:31.218779 | orchestrator | Sunday 05 April 2026 00:46:30 +0000 (0:00:00.155) 0:00:15.929 **********
2026-04-05 00:46:31.218787 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bd7e6aba-230a-5307-afd3-3b474950d4d0', 'data_vg': 'ceph-bd7e6aba-230a-5307-afd3-3b474950d4d0'})
2026-04-05 00:46:31.218794 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ffa9e237-b4c6-554d-9530-d8db42979c07', 'data_vg': 'ceph-ffa9e237-b4c6-554d-9530-d8db42979c07'})
2026-04-05 00:46:31.218802 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:31.218809 | orchestrator |
2026-04-05 00:46:31.218816 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
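The "Create dict of block VGs -> PVs" and the following "Create block VGs/LVs" tasks derive one volume group and one logical volume per OSD device from its osd_lvm_uuid. The naming convention visible in the log output can be sketched as follows (the helper name is hypothetical; only the naming pattern and the UUIDs come from the log):

```python
def lvm_names(ceph_osd_devices):
    """Derive the block VG/LV pair created for each OSD device:
    VG 'ceph-<osd_lvm_uuid>' holding LV 'osd-block-<osd_lvm_uuid>'."""
    return [
        {"data": f"osd-block-{cfg['osd_lvm_uuid']}",
         "data_vg": f"ceph-{cfg['osd_lvm_uuid']}"}
        for cfg in ceph_osd_devices.values()
    ]

# osd_lvm_uuid values taken from the log output for sdb and sdc.
devices = {
    "sdb": {"osd_lvm_uuid": "bd7e6aba-230a-5307-afd3-3b474950d4d0"},
    "sdc": {"osd_lvm_uuid": "ffa9e237-b4c6-554d-9530-d8db42979c07"},
}
pairs = lvm_names(devices)
```

Each resulting dict matches the loop items shown for the "Create block VGs" and "Create block LVs" tasks, so Ceph can later activate the OSD by its stable LVM name regardless of kernel device ordering.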
2026-04-05 00:46:31.218830 | orchestrator | Sunday 05 April 2026 00:46:30 +0000 (0:00:00.155) 0:00:16.085 **********
2026-04-05 00:46:31.218838 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bd7e6aba-230a-5307-afd3-3b474950d4d0', 'data_vg': 'ceph-bd7e6aba-230a-5307-afd3-3b474950d4d0'})
2026-04-05 00:46:31.218845 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ffa9e237-b4c6-554d-9530-d8db42979c07', 'data_vg': 'ceph-ffa9e237-b4c6-554d-9530-d8db42979c07'})
2026-04-05 00:46:31.218852 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:31.218859 | orchestrator |
2026-04-05 00:46:31.218867 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-05 00:46:31.218874 | orchestrator | Sunday 05 April 2026 00:46:30 +0000 (0:00:00.158) 0:00:16.244 **********
2026-04-05 00:46:31.218881 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bd7e6aba-230a-5307-afd3-3b474950d4d0', 'data_vg': 'ceph-bd7e6aba-230a-5307-afd3-3b474950d4d0'})
2026-04-05 00:46:31.218888 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ffa9e237-b4c6-554d-9530-d8db42979c07', 'data_vg': 'ceph-ffa9e237-b4c6-554d-9530-d8db42979c07'})
2026-04-05 00:46:31.218896 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:31.218903 | orchestrator |
2026-04-05 00:46:31.218910 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-05 00:46:31.218917 | orchestrator | Sunday 05 April 2026 00:46:31 +0000 (0:00:00.168) 0:00:16.412 **********
2026-04-05 00:46:31.218925 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:31.218932 | orchestrator |
2026-04-05 00:46:31.218940 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-05 00:46:31.218953 | orchestrator | Sunday 05 April 2026 00:46:31 +0000 (0:00:00.137) 0:00:16.549 **********
2026-04-05 00:46:37.698115 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:37.698235 | orchestrator |
2026-04-05 00:46:37.698256 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-05 00:46:37.698270 | orchestrator | Sunday 05 April 2026 00:46:31 +0000 (0:00:00.141) 0:00:16.691 **********
2026-04-05 00:46:37.698279 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:37.698287 | orchestrator |
2026-04-05 00:46:37.698296 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-05 00:46:37.698306 | orchestrator | Sunday 05 April 2026 00:46:31 +0000 (0:00:00.145) 0:00:16.836 **********
2026-04-05 00:46:37.698319 | orchestrator | ok: [testbed-node-3] => {
2026-04-05 00:46:37.698334 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-04-05 00:46:37.698346 | orchestrator | }
2026-04-05 00:46:37.698362 | orchestrator |
2026-04-05 00:46:37.698375 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-05 00:46:37.698386 | orchestrator | Sunday 05 April 2026 00:46:31 +0000 (0:00:00.373) 0:00:17.210 **********
2026-04-05 00:46:37.698397 | orchestrator | ok: [testbed-node-3] => {
2026-04-05 00:46:37.698409 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-04-05 00:46:37.698420 | orchestrator | }
2026-04-05 00:46:37.698433 | orchestrator |
2026-04-05 00:46:37.698447 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-05 00:46:37.698459 | orchestrator | Sunday 05 April 2026 00:46:32 +0000 (0:00:00.153) 0:00:17.363 **********
2026-04-05 00:46:37.698472 | orchestrator | ok: [testbed-node-3] => {
2026-04-05 00:46:37.698484 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-04-05 00:46:37.698573 | orchestrator | }
2026-04-05 00:46:37.698590 | orchestrator |
2026-04-05 00:46:37.698603 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-05 00:46:37.698616 | orchestrator | Sunday 05 April 2026 00:46:32 +0000 (0:00:00.151) 0:00:17.514 **********
2026-04-05 00:46:37.698629 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:46:37.698641 | orchestrator |
2026-04-05 00:46:37.698657 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-05 00:46:37.698669 | orchestrator | Sunday 05 April 2026 00:46:32 +0000 (0:00:00.653) 0:00:18.167 **********
2026-04-05 00:46:37.698713 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:46:37.698725 | orchestrator |
2026-04-05 00:46:37.698737 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-05 00:46:37.698748 | orchestrator | Sunday 05 April 2026 00:46:33 +0000 (0:00:00.476) 0:00:18.644 **********
2026-04-05 00:46:37.698759 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:46:37.698770 | orchestrator |
2026-04-05 00:46:37.698782 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-05 00:46:37.698795 | orchestrator | Sunday 05 April 2026 00:46:33 +0000 (0:00:00.519) 0:00:19.164 **********
2026-04-05 00:46:37.698807 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:46:37.698820 | orchestrator |
2026-04-05 00:46:37.698833 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-05 00:46:37.698846 | orchestrator | Sunday 05 April 2026 00:46:33 +0000 (0:00:00.163) 0:00:19.328 **********
2026-04-05 00:46:37.698859 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:37.698872 | orchestrator |
2026-04-05 00:46:37.698884 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-05 00:46:37.698898 | orchestrator | Sunday 05 April 2026 00:46:34 +0000 (0:00:00.124) 0:00:19.452 **********
2026-04-05 00:46:37.698912 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:37.698926 | orchestrator |
2026-04-05 00:46:37.698939 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-05 00:46:37.698952 | orchestrator | Sunday 05 April 2026 00:46:34 +0000 (0:00:00.102) 0:00:19.554 **********
2026-04-05 00:46:37.698966 | orchestrator | ok: [testbed-node-3] => {
2026-04-05 00:46:37.698980 | orchestrator |     "vgs_report": {
2026-04-05 00:46:37.698994 | orchestrator |         "vg": []
2026-04-05 00:46:37.699003 | orchestrator |     }
2026-04-05 00:46:37.699011 | orchestrator | }
2026-04-05 00:46:37.699019 | orchestrator |
2026-04-05 00:46:37.699026 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-04-05 00:46:37.699034 | orchestrator | Sunday 05 April 2026 00:46:34 +0000 (0:00:00.141) 0:00:19.696 **********
2026-04-05 00:46:37.699042 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:37.699050 | orchestrator |
2026-04-05 00:46:37.699058 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-04-05 00:46:37.699066 | orchestrator | Sunday 05 April 2026 00:46:34 +0000 (0:00:00.158) 0:00:19.854 **********
2026-04-05 00:46:37.699074 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:37.699082 | orchestrator |
2026-04-05 00:46:37.699090 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-04-05 00:46:37.699098 | orchestrator | Sunday 05 April 2026 00:46:34 +0000 (0:00:00.156) 0:00:20.011 **********
2026-04-05 00:46:37.699106 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:37.699113 | orchestrator |
2026-04-05 00:46:37.699121 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-04-05 00:46:37.699129 | orchestrator | Sunday 05 April 2026 00:46:34 +0000 (0:00:00.122) 0:00:20.133 **********
2026-04-05 00:46:37.699137 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:37.699144 | orchestrator |
2026-04-05 00:46:37.699152 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-04-05 00:46:37.699160 | orchestrator | Sunday 05 April 2026 00:46:35 +0000 (0:00:00.360) 0:00:20.494 **********
2026-04-05 00:46:37.699168 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:37.699176 | orchestrator |
2026-04-05 00:46:37.699183 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-04-05 00:46:37.699191 | orchestrator | Sunday 05 April 2026 00:46:35 +0000 (0:00:00.149) 0:00:20.644 **********
2026-04-05 00:46:37.699199 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:37.699207 | orchestrator |
2026-04-05 00:46:37.699214 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-04-05 00:46:37.699222 | orchestrator | Sunday 05 April 2026 00:46:35 +0000 (0:00:00.140) 0:00:20.784 **********
2026-04-05 00:46:37.699230 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:37.699248 | orchestrator |
2026-04-05 00:46:37.699256 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-04-05 00:46:37.699270 | orchestrator | Sunday 05 April 2026 00:46:35 +0000 (0:00:00.174) 0:00:20.959 **********
2026-04-05 00:46:37.699306 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:37.699319 | orchestrator |
2026-04-05 00:46:37.699349 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-04-05 00:46:37.699362 | orchestrator | Sunday 05 April 2026 00:46:35 +0000 (0:00:00.139) 0:00:21.099 **********
2026-04-05 00:46:37.699374 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:37.699386 | orchestrator |
2026-04-05 00:46:37.699398 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-05 00:46:37.699410 | orchestrator | Sunday 05 April 2026 00:46:35 +0000 (0:00:00.141) 0:00:21.241 **********
2026-04-05 00:46:37.699423 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:37.699436 | orchestrator |
2026-04-05 00:46:37.699448 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-04-05 00:46:37.699460 | orchestrator | Sunday 05 April 2026 00:46:36 +0000 (0:00:00.140) 0:00:21.381 **********
2026-04-05 00:46:37.699473 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:37.699486 | orchestrator |
2026-04-05 00:46:37.699523 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-04-05 00:46:37.699537 | orchestrator | Sunday 05 April 2026 00:46:36 +0000 (0:00:00.137) 0:00:21.518 **********
2026-04-05 00:46:37.699549 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:37.699562 | orchestrator |
2026-04-05 00:46:37.699576 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-04-05 00:46:37.699587 | orchestrator | Sunday 05 April 2026 00:46:36 +0000 (0:00:00.138) 0:00:21.657 **********
2026-04-05 00:46:37.699600 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:37.699613 | orchestrator |
2026-04-05 00:46:37.699625 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-04-05 00:46:37.699637 | orchestrator | Sunday 05 April 2026 00:46:36 +0000 (0:00:00.148) 0:00:21.806 **********
2026-04-05 00:46:37.699649 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:37.699662 | orchestrator |
2026-04-05 00:46:37.699683 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-04-05 00:46:37.699698 | orchestrator | Sunday 05 April 2026 00:46:36 +0000 (0:00:00.146) 0:00:21.952 **********
2026-04-05 00:46:37.699713 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bd7e6aba-230a-5307-afd3-3b474950d4d0', 'data_vg':
'ceph-bd7e6aba-230a-5307-afd3-3b474950d4d0'})  2026-04-05 00:46:37.699728 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ffa9e237-b4c6-554d-9530-d8db42979c07', 'data_vg': 'ceph-ffa9e237-b4c6-554d-9530-d8db42979c07'})  2026-04-05 00:46:37.699738 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:46:37.699746 | orchestrator | 2026-04-05 00:46:37.699753 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-05 00:46:37.699761 | orchestrator | Sunday 05 April 2026 00:46:36 +0000 (0:00:00.155) 0:00:22.108 ********** 2026-04-05 00:46:37.699773 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bd7e6aba-230a-5307-afd3-3b474950d4d0', 'data_vg': 'ceph-bd7e6aba-230a-5307-afd3-3b474950d4d0'})  2026-04-05 00:46:37.699786 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ffa9e237-b4c6-554d-9530-d8db42979c07', 'data_vg': 'ceph-ffa9e237-b4c6-554d-9530-d8db42979c07'})  2026-04-05 00:46:37.699799 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:46:37.699812 | orchestrator | 2026-04-05 00:46:37.699825 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-05 00:46:37.699838 | orchestrator | Sunday 05 April 2026 00:46:37 +0000 (0:00:00.344) 0:00:22.452 ********** 2026-04-05 00:46:37.699852 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bd7e6aba-230a-5307-afd3-3b474950d4d0', 'data_vg': 'ceph-bd7e6aba-230a-5307-afd3-3b474950d4d0'})  2026-04-05 00:46:37.699865 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ffa9e237-b4c6-554d-9530-d8db42979c07', 'data_vg': 'ceph-ffa9e237-b4c6-554d-9530-d8db42979c07'})  2026-04-05 00:46:37.699890 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:46:37.699903 | orchestrator | 2026-04-05 00:46:37.699912 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-04-05 
00:46:37.699919 | orchestrator | Sunday 05 April 2026 00:46:37 +0000 (0:00:00.161) 0:00:22.613 ********** 2026-04-05 00:46:37.699927 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bd7e6aba-230a-5307-afd3-3b474950d4d0', 'data_vg': 'ceph-bd7e6aba-230a-5307-afd3-3b474950d4d0'})  2026-04-05 00:46:37.699935 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ffa9e237-b4c6-554d-9530-d8db42979c07', 'data_vg': 'ceph-ffa9e237-b4c6-554d-9530-d8db42979c07'})  2026-04-05 00:46:37.699943 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:46:37.699951 | orchestrator | 2026-04-05 00:46:37.699959 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-05 00:46:37.699966 | orchestrator | Sunday 05 April 2026 00:46:37 +0000 (0:00:00.181) 0:00:22.795 ********** 2026-04-05 00:46:37.699974 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bd7e6aba-230a-5307-afd3-3b474950d4d0', 'data_vg': 'ceph-bd7e6aba-230a-5307-afd3-3b474950d4d0'})  2026-04-05 00:46:37.699982 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ffa9e237-b4c6-554d-9530-d8db42979c07', 'data_vg': 'ceph-ffa9e237-b4c6-554d-9530-d8db42979c07'})  2026-04-05 00:46:37.699990 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:46:37.699997 | orchestrator | 2026-04-05 00:46:37.700005 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-05 00:46:37.700013 | orchestrator | Sunday 05 April 2026 00:46:37 +0000 (0:00:00.167) 0:00:22.963 ********** 2026-04-05 00:46:37.700033 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bd7e6aba-230a-5307-afd3-3b474950d4d0', 'data_vg': 'ceph-bd7e6aba-230a-5307-afd3-3b474950d4d0'})  2026-04-05 00:46:43.021845 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ffa9e237-b4c6-554d-9530-d8db42979c07', 'data_vg': 
'ceph-ffa9e237-b4c6-554d-9530-d8db42979c07'})  2026-04-05 00:46:43.021956 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:46:43.021973 | orchestrator | 2026-04-05 00:46:43.021986 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-04-05 00:46:43.021999 | orchestrator | Sunday 05 April 2026 00:46:37 +0000 (0:00:00.151) 0:00:23.114 ********** 2026-04-05 00:46:43.022010 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bd7e6aba-230a-5307-afd3-3b474950d4d0', 'data_vg': 'ceph-bd7e6aba-230a-5307-afd3-3b474950d4d0'})  2026-04-05 00:46:43.022092 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ffa9e237-b4c6-554d-9530-d8db42979c07', 'data_vg': 'ceph-ffa9e237-b4c6-554d-9530-d8db42979c07'})  2026-04-05 00:46:43.022111 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:46:43.022138 | orchestrator | 2026-04-05 00:46:43.022162 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-05 00:46:43.022181 | orchestrator | Sunday 05 April 2026 00:46:37 +0000 (0:00:00.175) 0:00:23.290 ********** 2026-04-05 00:46:43.022198 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bd7e6aba-230a-5307-afd3-3b474950d4d0', 'data_vg': 'ceph-bd7e6aba-230a-5307-afd3-3b474950d4d0'})  2026-04-05 00:46:43.022239 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ffa9e237-b4c6-554d-9530-d8db42979c07', 'data_vg': 'ceph-ffa9e237-b4c6-554d-9530-d8db42979c07'})  2026-04-05 00:46:43.022257 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:46:43.022275 | orchestrator | 2026-04-05 00:46:43.022292 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-05 00:46:43.022311 | orchestrator | Sunday 05 April 2026 00:46:38 +0000 (0:00:00.146) 0:00:23.436 ********** 2026-04-05 00:46:43.022330 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:46:43.022350 | 
orchestrator | 2026-04-05 00:46:43.022395 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-05 00:46:43.022418 | orchestrator | Sunday 05 April 2026 00:46:38 +0000 (0:00:00.528) 0:00:23.964 ********** 2026-04-05 00:46:43.022440 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:46:43.022461 | orchestrator | 2026-04-05 00:46:43.022481 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-05 00:46:43.022528 | orchestrator | Sunday 05 April 2026 00:46:39 +0000 (0:00:00.524) 0:00:24.489 ********** 2026-04-05 00:46:43.022549 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:46:43.022568 | orchestrator | 2026-04-05 00:46:43.022586 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-05 00:46:43.022605 | orchestrator | Sunday 05 April 2026 00:46:39 +0000 (0:00:00.158) 0:00:24.648 ********** 2026-04-05 00:46:43.022625 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-bd7e6aba-230a-5307-afd3-3b474950d4d0', 'vg_name': 'ceph-bd7e6aba-230a-5307-afd3-3b474950d4d0'}) 2026-04-05 00:46:43.022647 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-ffa9e237-b4c6-554d-9530-d8db42979c07', 'vg_name': 'ceph-ffa9e237-b4c6-554d-9530-d8db42979c07'}) 2026-04-05 00:46:43.022665 | orchestrator | 2026-04-05 00:46:43.022685 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-05 00:46:43.022697 | orchestrator | Sunday 05 April 2026 00:46:39 +0000 (0:00:00.175) 0:00:24.824 ********** 2026-04-05 00:46:43.022708 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bd7e6aba-230a-5307-afd3-3b474950d4d0', 'data_vg': 'ceph-bd7e6aba-230a-5307-afd3-3b474950d4d0'})  2026-04-05 00:46:43.022719 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ffa9e237-b4c6-554d-9530-d8db42979c07', 'data_vg': 
'ceph-ffa9e237-b4c6-554d-9530-d8db42979c07'})  2026-04-05 00:46:43.022730 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:46:43.022741 | orchestrator | 2026-04-05 00:46:43.022752 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-04-05 00:46:43.022763 | orchestrator | Sunday 05 April 2026 00:46:39 +0000 (0:00:00.172) 0:00:24.996 ********** 2026-04-05 00:46:43.022773 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bd7e6aba-230a-5307-afd3-3b474950d4d0', 'data_vg': 'ceph-bd7e6aba-230a-5307-afd3-3b474950d4d0'})  2026-04-05 00:46:43.022784 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ffa9e237-b4c6-554d-9530-d8db42979c07', 'data_vg': 'ceph-ffa9e237-b4c6-554d-9530-d8db42979c07'})  2026-04-05 00:46:43.022795 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:46:43.022805 | orchestrator | 2026-04-05 00:46:43.022816 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-05 00:46:43.022827 | orchestrator | Sunday 05 April 2026 00:46:40 +0000 (0:00:00.367) 0:00:25.363 ********** 2026-04-05 00:46:43.022837 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bd7e6aba-230a-5307-afd3-3b474950d4d0', 'data_vg': 'ceph-bd7e6aba-230a-5307-afd3-3b474950d4d0'})  2026-04-05 00:46:43.022848 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ffa9e237-b4c6-554d-9530-d8db42979c07', 'data_vg': 'ceph-ffa9e237-b4c6-554d-9530-d8db42979c07'})  2026-04-05 00:46:43.022859 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:46:43.022870 | orchestrator | 2026-04-05 00:46:43.022880 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-05 00:46:43.022891 | orchestrator | Sunday 05 April 2026 00:46:40 +0000 (0:00:00.159) 0:00:25.522 ********** 2026-04-05 00:46:43.022926 | orchestrator | ok: [testbed-node-3] => { 2026-04-05 
00:46:43.022938 | orchestrator |  "lvm_report": { 2026-04-05 00:46:43.022949 | orchestrator |  "lv": [ 2026-04-05 00:46:43.022960 | orchestrator |  { 2026-04-05 00:46:43.022971 | orchestrator |  "lv_name": "osd-block-bd7e6aba-230a-5307-afd3-3b474950d4d0", 2026-04-05 00:46:43.022983 | orchestrator |  "vg_name": "ceph-bd7e6aba-230a-5307-afd3-3b474950d4d0" 2026-04-05 00:46:43.022994 | orchestrator |  }, 2026-04-05 00:46:43.023016 | orchestrator |  { 2026-04-05 00:46:43.023027 | orchestrator |  "lv_name": "osd-block-ffa9e237-b4c6-554d-9530-d8db42979c07", 2026-04-05 00:46:43.023038 | orchestrator |  "vg_name": "ceph-ffa9e237-b4c6-554d-9530-d8db42979c07" 2026-04-05 00:46:43.023049 | orchestrator |  } 2026-04-05 00:46:43.023059 | orchestrator |  ], 2026-04-05 00:46:43.023069 | orchestrator |  "pv": [ 2026-04-05 00:46:43.023080 | orchestrator |  { 2026-04-05 00:46:43.023090 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-05 00:46:43.023101 | orchestrator |  "vg_name": "ceph-bd7e6aba-230a-5307-afd3-3b474950d4d0" 2026-04-05 00:46:43.023112 | orchestrator |  }, 2026-04-05 00:46:43.023122 | orchestrator |  { 2026-04-05 00:46:43.023133 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-05 00:46:43.023144 | orchestrator |  "vg_name": "ceph-ffa9e237-b4c6-554d-9530-d8db42979c07" 2026-04-05 00:46:43.023154 | orchestrator |  } 2026-04-05 00:46:43.023165 | orchestrator |  ] 2026-04-05 00:46:43.023176 | orchestrator |  } 2026-04-05 00:46:43.023187 | orchestrator | } 2026-04-05 00:46:43.023197 | orchestrator | 2026-04-05 00:46:43.023208 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-04-05 00:46:43.023219 | orchestrator | 2026-04-05 00:46:43.023230 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-05 00:46:43.023241 | orchestrator | Sunday 05 April 2026 00:46:40 +0000 (0:00:00.288) 0:00:25.811 ********** 2026-04-05 00:46:43.023251 | orchestrator | ok: [testbed-node-4 -> 
testbed-manager(192.168.16.5)] 2026-04-05 00:46:43.023262 | orchestrator | 2026-04-05 00:46:43.023273 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-05 00:46:43.023284 | orchestrator | Sunday 05 April 2026 00:46:40 +0000 (0:00:00.248) 0:00:26.059 ********** 2026-04-05 00:46:43.023295 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:46:43.023306 | orchestrator | 2026-04-05 00:46:43.023316 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:46:43.023327 | orchestrator | Sunday 05 April 2026 00:46:40 +0000 (0:00:00.252) 0:00:26.312 ********** 2026-04-05 00:46:43.023338 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-04-05 00:46:43.023348 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-04-05 00:46:43.023359 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-04-05 00:46:43.023369 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-04-05 00:46:43.023380 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-04-05 00:46:43.023390 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-04-05 00:46:43.023401 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-04-05 00:46:43.023411 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-04-05 00:46:43.023422 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-04-05 00:46:43.023444 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-04-05 00:46:43.023455 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-04-05 00:46:43.023466 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-04-05 00:46:43.023477 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-04-05 00:46:43.023487 | orchestrator | 2026-04-05 00:46:43.023529 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:46:43.023549 | orchestrator | Sunday 05 April 2026 00:46:41 +0000 (0:00:00.415) 0:00:26.727 ********** 2026-04-05 00:46:43.023568 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:43.023598 | orchestrator | 2026-04-05 00:46:43.023610 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:46:43.023620 | orchestrator | Sunday 05 April 2026 00:46:41 +0000 (0:00:00.213) 0:00:26.941 ********** 2026-04-05 00:46:43.023631 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:43.023641 | orchestrator | 2026-04-05 00:46:43.023652 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:46:43.023663 | orchestrator | Sunday 05 April 2026 00:46:41 +0000 (0:00:00.181) 0:00:27.122 ********** 2026-04-05 00:46:43.023673 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:43.023684 | orchestrator | 2026-04-05 00:46:43.023694 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:46:43.023705 | orchestrator | Sunday 05 April 2026 00:46:41 +0000 (0:00:00.187) 0:00:27.310 ********** 2026-04-05 00:46:43.023715 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:43.023729 | orchestrator | 2026-04-05 00:46:43.023748 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:46:43.023766 | orchestrator | Sunday 05 April 2026 00:46:42 +0000 
(0:00:00.648) 0:00:27.959 ********** 2026-04-05 00:46:43.023784 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:43.023802 | orchestrator | 2026-04-05 00:46:43.023820 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:46:43.023838 | orchestrator | Sunday 05 April 2026 00:46:42 +0000 (0:00:00.204) 0:00:28.164 ********** 2026-04-05 00:46:43.023856 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:43.023875 | orchestrator | 2026-04-05 00:46:43.023906 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:46:53.370606 | orchestrator | Sunday 05 April 2026 00:46:43 +0000 (0:00:00.187) 0:00:28.351 ********** 2026-04-05 00:46:53.370717 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:53.370746 | orchestrator | 2026-04-05 00:46:53.370760 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:46:53.370771 | orchestrator | Sunday 05 April 2026 00:46:43 +0000 (0:00:00.210) 0:00:28.562 ********** 2026-04-05 00:46:53.370782 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:53.370793 | orchestrator | 2026-04-05 00:46:53.370804 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:46:53.370815 | orchestrator | Sunday 05 April 2026 00:46:43 +0000 (0:00:00.214) 0:00:28.776 ********** 2026-04-05 00:46:53.370826 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3) 2026-04-05 00:46:53.370837 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3) 2026-04-05 00:46:53.370848 | orchestrator | 2026-04-05 00:46:53.370885 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:46:53.370899 | orchestrator | Sunday 05 April 2026 00:46:43 +0000 
(0:00:00.425) 0:00:29.201 ********** 2026-04-05 00:46:53.370910 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_dde5ff38-a1e5-4746-bab1-211109e78654) 2026-04-05 00:46:53.370920 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_dde5ff38-a1e5-4746-bab1-211109e78654) 2026-04-05 00:46:53.370931 | orchestrator | 2026-04-05 00:46:53.370967 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:46:53.370979 | orchestrator | Sunday 05 April 2026 00:46:44 +0000 (0:00:00.442) 0:00:29.644 ********** 2026-04-05 00:46:53.370990 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4c017526-66b5-4804-9f5d-05d3d9a7b1e0) 2026-04-05 00:46:53.371001 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4c017526-66b5-4804-9f5d-05d3d9a7b1e0) 2026-04-05 00:46:53.371012 | orchestrator | 2026-04-05 00:46:53.371023 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:46:53.371034 | orchestrator | Sunday 05 April 2026 00:46:44 +0000 (0:00:00.432) 0:00:30.076 ********** 2026-04-05 00:46:53.371044 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_26a11086-b273-42dd-aa8f-9644b133a637) 2026-04-05 00:46:53.371080 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_26a11086-b273-42dd-aa8f-9644b133a637) 2026-04-05 00:46:53.371091 | orchestrator | 2026-04-05 00:46:53.371102 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:46:53.371126 | orchestrator | Sunday 05 April 2026 00:46:45 +0000 (0:00:00.422) 0:00:30.498 ********** 2026-04-05 00:46:53.371137 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-05 00:46:53.371148 | orchestrator | 2026-04-05 00:46:53.371158 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 
00:46:53.371169 | orchestrator | Sunday 05 April 2026 00:46:45 +0000 (0:00:00.355) 0:00:30.854 ********** 2026-04-05 00:46:53.371180 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-04-05 00:46:53.371191 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-04-05 00:46:53.371202 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-04-05 00:46:53.371212 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-04-05 00:46:53.371223 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-04-05 00:46:53.371233 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-04-05 00:46:53.371244 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-04-05 00:46:53.371255 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-04-05 00:46:53.371266 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-04-05 00:46:53.371276 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-04-05 00:46:53.371287 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-04-05 00:46:53.371311 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-04-05 00:46:53.371322 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-04-05 00:46:53.371344 | orchestrator | 2026-04-05 00:46:53.371355 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:46:53.371366 | 
orchestrator | Sunday 05 April 2026 00:46:46 +0000 (0:00:00.640) 0:00:31.494 ********** 2026-04-05 00:46:53.371377 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:53.371388 | orchestrator | 2026-04-05 00:46:53.371398 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:46:53.371409 | orchestrator | Sunday 05 April 2026 00:46:46 +0000 (0:00:00.209) 0:00:31.704 ********** 2026-04-05 00:46:53.371420 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:53.371430 | orchestrator | 2026-04-05 00:46:53.371441 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:46:53.371452 | orchestrator | Sunday 05 April 2026 00:46:46 +0000 (0:00:00.218) 0:00:31.922 ********** 2026-04-05 00:46:53.371480 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:53.371491 | orchestrator | 2026-04-05 00:46:53.371540 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:46:53.371552 | orchestrator | Sunday 05 April 2026 00:46:46 +0000 (0:00:00.227) 0:00:32.150 ********** 2026-04-05 00:46:53.371563 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:53.371574 | orchestrator | 2026-04-05 00:46:53.371584 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:46:53.371595 | orchestrator | Sunday 05 April 2026 00:46:47 +0000 (0:00:00.195) 0:00:32.345 ********** 2026-04-05 00:46:53.371606 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:53.371626 | orchestrator | 2026-04-05 00:46:53.371643 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:46:53.371662 | orchestrator | Sunday 05 April 2026 00:46:47 +0000 (0:00:00.192) 0:00:32.537 ********** 2026-04-05 00:46:53.371673 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:53.371684 | orchestrator | 2026-04-05 
00:46:53.371694 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:46:53.371705 | orchestrator | Sunday 05 April 2026 00:46:47 +0000 (0:00:00.190) 0:00:32.728 ********** 2026-04-05 00:46:53.371716 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:53.371727 | orchestrator | 2026-04-05 00:46:53.371738 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:46:53.371748 | orchestrator | Sunday 05 April 2026 00:46:47 +0000 (0:00:00.195) 0:00:32.923 ********** 2026-04-05 00:46:53.371759 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:53.371769 | orchestrator | 2026-04-05 00:46:53.371780 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:46:53.371796 | orchestrator | Sunday 05 April 2026 00:46:47 +0000 (0:00:00.193) 0:00:33.116 ********** 2026-04-05 00:46:53.371807 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-04-05 00:46:53.371831 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-04-05 00:46:53.371842 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-04-05 00:46:53.371853 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-04-05 00:46:53.371864 | orchestrator | 2026-04-05 00:46:53.371874 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:46:53.371885 | orchestrator | Sunday 05 April 2026 00:46:48 +0000 (0:00:00.756) 0:00:33.873 ********** 2026-04-05 00:46:53.371896 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:53.371906 | orchestrator | 2026-04-05 00:46:53.371917 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:46:53.371928 | orchestrator | Sunday 05 April 2026 00:46:48 +0000 (0:00:00.180) 0:00:34.053 ********** 2026-04-05 00:46:53.371939 | orchestrator | skipping: [testbed-node-4] 2026-04-05 
00:46:53.371949 | orchestrator |
2026-04-05 00:46:53.371960 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:46:53.371971 | orchestrator | Sunday 05 April 2026 00:46:48 +0000 (0:00:00.199) 0:00:34.253 **********
2026-04-05 00:46:53.371992 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:53.372003 | orchestrator |
2026-04-05 00:46:53.372014 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:46:53.372024 | orchestrator | Sunday 05 April 2026 00:46:49 +0000 (0:00:00.526) 0:00:34.779 **********
2026-04-05 00:46:53.372035 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:53.372046 | orchestrator |
2026-04-05 00:46:53.372057 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-05 00:46:53.372068 | orchestrator | Sunday 05 April 2026 00:46:49 +0000 (0:00:00.189) 0:00:34.969 **********
2026-04-05 00:46:53.372079 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:53.372090 | orchestrator |
2026-04-05 00:46:53.372100 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-05 00:46:53.372111 | orchestrator | Sunday 05 April 2026 00:46:49 +0000 (0:00:00.115) 0:00:35.084 **********
2026-04-05 00:46:53.372122 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c330a934-8550-546d-8551-a9ce4f4a4f0f'}})
2026-04-05 00:46:53.372147 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '824ea9fd-8e44-5b08-9075-8333765a455e'}})
2026-04-05 00:46:53.372158 | orchestrator |
2026-04-05 00:46:53.372169 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-05 00:46:53.372179 | orchestrator | Sunday 05 April 2026 00:46:49 +0000 (0:00:00.212) 0:00:35.296 **********
2026-04-05 00:46:53.372191 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c330a934-8550-546d-8551-a9ce4f4a4f0f', 'data_vg': 'ceph-c330a934-8550-546d-8551-a9ce4f4a4f0f'})
2026-04-05 00:46:53.372215 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-824ea9fd-8e44-5b08-9075-8333765a455e', 'data_vg': 'ceph-824ea9fd-8e44-5b08-9075-8333765a455e'})
2026-04-05 00:46:53.372233 | orchestrator |
2026-04-05 00:46:53.372244 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-05 00:46:53.372255 | orchestrator | Sunday 05 April 2026 00:46:51 +0000 (0:00:01.913) 0:00:37.210 **********
2026-04-05 00:46:53.372266 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c330a934-8550-546d-8551-a9ce4f4a4f0f', 'data_vg': 'ceph-c330a934-8550-546d-8551-a9ce4f4a4f0f'})
2026-04-05 00:46:53.372277 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-824ea9fd-8e44-5b08-9075-8333765a455e', 'data_vg': 'ceph-824ea9fd-8e44-5b08-9075-8333765a455e'})
2026-04-05 00:46:53.372288 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:53.372298 | orchestrator |
2026-04-05 00:46:53.372309 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-05 00:46:53.372319 | orchestrator | Sunday 05 April 2026 00:46:52 +0000 (0:00:00.167) 0:00:37.378 **********
2026-04-05 00:46:53.372330 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c330a934-8550-546d-8551-a9ce4f4a4f0f', 'data_vg': 'ceph-c330a934-8550-546d-8551-a9ce4f4a4f0f'})
2026-04-05 00:46:53.372362 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-824ea9fd-8e44-5b08-9075-8333765a455e', 'data_vg': 'ceph-824ea9fd-8e44-5b08-9075-8333765a455e'})
2026-04-05 00:46:58.742007 | orchestrator |
2026-04-05 00:46:58.742149 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-05 00:46:58.742166 | orchestrator | Sunday 05 April 2026 00:46:53 +0000 (0:00:01.387) 0:00:38.766 **********
2026-04-05 00:46:58.742178 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c330a934-8550-546d-8551-a9ce4f4a4f0f', 'data_vg': 'ceph-c330a934-8550-546d-8551-a9ce4f4a4f0f'})
2026-04-05 00:46:58.742191 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-824ea9fd-8e44-5b08-9075-8333765a455e', 'data_vg': 'ceph-824ea9fd-8e44-5b08-9075-8333765a455e'})
2026-04-05 00:46:58.742201 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:58.742213 | orchestrator |
2026-04-05 00:46:58.742231 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-05 00:46:58.742250 | orchestrator | Sunday 05 April 2026 00:46:53 +0000 (0:00:00.164) 0:00:38.931 **********
2026-04-05 00:46:58.742272 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:58.742299 | orchestrator |
2026-04-05 00:46:58.742319 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-05 00:46:58.742339 | orchestrator | Sunday 05 April 2026 00:46:53 +0000 (0:00:00.127) 0:00:39.058 **********
2026-04-05 00:46:58.742359 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c330a934-8550-546d-8551-a9ce4f4a4f0f', 'data_vg': 'ceph-c330a934-8550-546d-8551-a9ce4f4a4f0f'})
2026-04-05 00:46:58.742379 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-824ea9fd-8e44-5b08-9075-8333765a455e', 'data_vg': 'ceph-824ea9fd-8e44-5b08-9075-8333765a455e'})
2026-04-05 00:46:58.742400 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:58.742419 | orchestrator |
2026-04-05 00:46:58.742430 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-05 00:46:58.742441 | orchestrator | Sunday 05 April 2026 00:46:53 +0000 (0:00:00.151) 0:00:39.209 **********
2026-04-05 00:46:58.742451 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:58.742462 | orchestrator |
2026-04-05 00:46:58.742473 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-05 00:46:58.742483 | orchestrator | Sunday 05 April 2026 00:46:54 +0000 (0:00:00.129) 0:00:39.339 **********
2026-04-05 00:46:58.742494 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c330a934-8550-546d-8551-a9ce4f4a4f0f', 'data_vg': 'ceph-c330a934-8550-546d-8551-a9ce4f4a4f0f'})
2026-04-05 00:46:58.742555 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-824ea9fd-8e44-5b08-9075-8333765a455e', 'data_vg': 'ceph-824ea9fd-8e44-5b08-9075-8333765a455e'})
2026-04-05 00:46:58.742601 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:58.742622 | orchestrator |
2026-04-05 00:46:58.742643 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-05 00:46:58.742658 | orchestrator | Sunday 05 April 2026 00:46:54 +0000 (0:00:00.163) 0:00:39.503 **********
2026-04-05 00:46:58.742670 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:58.742683 | orchestrator |
2026-04-05 00:46:58.742712 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-05 00:46:58.742725 | orchestrator | Sunday 05 April 2026 00:46:54 +0000 (0:00:00.295) 0:00:39.799 **********
2026-04-05 00:46:58.742736 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c330a934-8550-546d-8551-a9ce4f4a4f0f', 'data_vg': 'ceph-c330a934-8550-546d-8551-a9ce4f4a4f0f'})
2026-04-05 00:46:58.742747 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-824ea9fd-8e44-5b08-9075-8333765a455e', 'data_vg': 'ceph-824ea9fd-8e44-5b08-9075-8333765a455e'})
2026-04-05 00:46:58.742758 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:58.742768 | orchestrator |
2026-04-05 00:46:58.742779 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-05 00:46:58.742790 | orchestrator | Sunday 05 April 2026 00:46:54 +0000 (0:00:00.178) 0:00:39.977 **********
2026-04-05 00:46:58.742801 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:46:58.742812 | orchestrator |
2026-04-05 00:46:58.742823 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-05 00:46:58.742834 | orchestrator | Sunday 05 April 2026 00:46:54 +0000 (0:00:00.141) 0:00:40.118 **********
2026-04-05 00:46:58.742844 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c330a934-8550-546d-8551-a9ce4f4a4f0f', 'data_vg': 'ceph-c330a934-8550-546d-8551-a9ce4f4a4f0f'})
2026-04-05 00:46:58.742855 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-824ea9fd-8e44-5b08-9075-8333765a455e', 'data_vg': 'ceph-824ea9fd-8e44-5b08-9075-8333765a455e'})
2026-04-05 00:46:58.742866 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:58.742877 | orchestrator |
2026-04-05 00:46:58.742887 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-05 00:46:58.742898 | orchestrator | Sunday 05 April 2026 00:46:54 +0000 (0:00:00.160) 0:00:40.279 **********
2026-04-05 00:46:58.742909 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c330a934-8550-546d-8551-a9ce4f4a4f0f', 'data_vg': 'ceph-c330a934-8550-546d-8551-a9ce4f4a4f0f'})
2026-04-05 00:46:58.742920 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-824ea9fd-8e44-5b08-9075-8333765a455e', 'data_vg': 'ceph-824ea9fd-8e44-5b08-9075-8333765a455e'})
2026-04-05 00:46:58.742930 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:58.742941 | orchestrator |
2026-04-05 00:46:58.742952 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-05 00:46:58.742982 | orchestrator | Sunday 05 April 2026 00:46:55 +0000 (0:00:00.149) 0:00:40.428 **********
2026-04-05 00:46:58.742994 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c330a934-8550-546d-8551-a9ce4f4a4f0f', 'data_vg': 'ceph-c330a934-8550-546d-8551-a9ce4f4a4f0f'})
2026-04-05 00:46:58.743005 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-824ea9fd-8e44-5b08-9075-8333765a455e', 'data_vg': 'ceph-824ea9fd-8e44-5b08-9075-8333765a455e'})
2026-04-05 00:46:58.743015 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:58.743026 | orchestrator |
2026-04-05 00:46:58.743037 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-05 00:46:58.743048 | orchestrator | Sunday 05 April 2026 00:46:55 +0000 (0:00:00.152) 0:00:40.581 **********
2026-04-05 00:46:58.743058 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:58.743069 | orchestrator |
2026-04-05 00:46:58.743080 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-05 00:46:58.743090 | orchestrator | Sunday 05 April 2026 00:46:55 +0000 (0:00:00.134) 0:00:40.716 **********
2026-04-05 00:46:58.743108 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:58.743119 | orchestrator |
2026-04-05 00:46:58.743130 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-05 00:46:58.743145 | orchestrator | Sunday 05 April 2026 00:46:55 +0000 (0:00:00.129) 0:00:40.846 **********
2026-04-05 00:46:58.743156 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:58.743167 | orchestrator |
2026-04-05 00:46:58.743178 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-05 00:46:58.743188 | orchestrator | Sunday 05 April 2026 00:46:55 +0000 (0:00:00.130) 0:00:40.977 **********
2026-04-05 00:46:58.743199 | orchestrator | ok: [testbed-node-4] => {
2026-04-05 00:46:58.743210 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-04-05 00:46:58.743221 | orchestrator | }
2026-04-05 00:46:58.743231 | orchestrator |
2026-04-05 00:46:58.743242 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-05 00:46:58.743252 | orchestrator | Sunday 05 April 2026 00:46:55 +0000 (0:00:00.134) 0:00:41.111 **********
2026-04-05 00:46:58.743268 | orchestrator | ok: [testbed-node-4] => {
2026-04-05 00:46:58.743286 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-04-05 00:46:58.743310 | orchestrator | }
2026-04-05 00:46:58.743332 | orchestrator |
2026-04-05 00:46:58.743350 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-05 00:46:58.743367 | orchestrator | Sunday 05 April 2026 00:46:55 +0000 (0:00:00.135) 0:00:41.247 **********
2026-04-05 00:46:58.743385 | orchestrator | ok: [testbed-node-4] => {
2026-04-05 00:46:58.743401 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-04-05 00:46:58.743420 | orchestrator | }
2026-04-05 00:46:58.743438 | orchestrator |
2026-04-05 00:46:58.743456 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-05 00:46:58.743475 | orchestrator | Sunday 05 April 2026 00:46:56 +0000 (0:00:00.117) 0:00:41.364 **********
2026-04-05 00:46:58.743494 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:46:58.743553 | orchestrator |
2026-04-05 00:46:58.743573 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-05 00:46:58.743590 | orchestrator | Sunday 05 April 2026 00:46:56 +0000 (0:00:00.666) 0:00:42.030 **********
2026-04-05 00:46:58.743608 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:46:58.743624 | orchestrator |
2026-04-05 00:46:58.743642 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-05 00:46:58.743660 | orchestrator | Sunday 05 April 2026 00:46:57 +0000 (0:00:00.532) 0:00:42.563 **********
2026-04-05 00:46:58.743677 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:46:58.743695 | orchestrator |
2026-04-05 00:46:58.743712 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-05 00:46:58.743729 | orchestrator | Sunday 05 April 2026 00:46:57 +0000 (0:00:00.512) 0:00:43.075 **********
2026-04-05 00:46:58.743746 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:46:58.743764 | orchestrator |
2026-04-05 00:46:58.743782 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-05 00:46:58.743800 | orchestrator | Sunday 05 April 2026 00:46:57 +0000 (0:00:00.134) 0:00:43.210 **********
2026-04-05 00:46:58.743818 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:58.743835 | orchestrator |
2026-04-05 00:46:58.743852 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-05 00:46:58.743871 | orchestrator | Sunday 05 April 2026 00:46:57 +0000 (0:00:00.117) 0:00:43.327 **********
2026-04-05 00:46:58.743888 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:58.743906 | orchestrator |
2026-04-05 00:46:58.743923 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-05 00:46:58.743940 | orchestrator | Sunday 05 April 2026 00:46:58 +0000 (0:00:00.105) 0:00:43.433 **********
2026-04-05 00:46:58.743958 | orchestrator | ok: [testbed-node-4] => {
2026-04-05 00:46:58.743975 | orchestrator |  "vgs_report": {
2026-04-05 00:46:58.743994 | orchestrator |  "vg": []
2026-04-05 00:46:58.744012 | orchestrator |  }
2026-04-05 00:46:58.744029 | orchestrator | }
2026-04-05 00:46:58.744064 | orchestrator |
2026-04-05 00:46:58.744083 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-04-05 00:46:58.744100 | orchestrator | Sunday 05 April 2026 00:46:58 +0000 (0:00:00.132) 0:00:43.566 **********
2026-04-05 00:46:58.744118 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:58.744137 | orchestrator |
2026-04-05 00:46:58.744154 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-04-05 00:46:58.744172 | orchestrator | Sunday 05 April 2026 00:46:58 +0000 (0:00:00.126) 0:00:43.693 **********
2026-04-05 00:46:58.744190 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:58.744208 | orchestrator |
2026-04-05 00:46:58.744226 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-04-05 00:46:58.744244 | orchestrator | Sunday 05 April 2026 00:46:58 +0000 (0:00:00.126) 0:00:43.820 **********
2026-04-05 00:46:58.744263 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:58.744280 | orchestrator |
2026-04-05 00:46:58.744300 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-04-05 00:46:58.744319 | orchestrator | Sunday 05 April 2026 00:46:58 +0000 (0:00:00.129) 0:00:43.949 **********
2026-04-05 00:46:58.744337 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:58.744355 | orchestrator |
2026-04-05 00:46:58.744391 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-04-05 00:47:03.029905 | orchestrator | Sunday 05 April 2026 00:46:58 +0000 (0:00:00.123) 0:00:44.072 **********
2026-04-05 00:47:03.029998 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:47:03.030068 | orchestrator |
2026-04-05 00:47:03.030084 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-04-05 00:47:03.030096 | orchestrator | Sunday 05 April 2026 00:46:58 +0000 (0:00:00.124) 0:00:44.197 **********
2026-04-05 00:47:03.030107 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:47:03.030118 | orchestrator |
2026-04-05 00:47:03.030129 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
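Editor's note: the "Create block VGs" / "Create block LVs" tasks above derive one volume group and one logical volume per entry of `ceph_osd_devices`, both named after the device's `osd_lvm_uuid` (visible in the `changed:` items). A minimal sketch of that naming scheme, reconstructed from the log output alone (the helper `lvm_volumes` is hypothetical, not the playbook's actual code):

```python
# Sketch: derive ceph-volume style VG/LV names from ceph_osd_devices,
# reproducing the data/data_vg items seen in the "Create block VGs/LVs"
# tasks above. The naming convention is inferred from the log output.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "c330a934-8550-546d-8551-a9ce4f4a4f0f"},
    "sdc": {"osd_lvm_uuid": "824ea9fd-8e44-5b08-9075-8333765a455e"},
}

def lvm_volumes(devices: dict) -> list[dict]:
    """Return one {data, data_vg} entry per OSD device."""
    return [
        {
            "data": f"osd-block-{params['osd_lvm_uuid']}",
            "data_vg": f"ceph-{params['osd_lvm_uuid']}",
        }
        for params in devices.values()
    ]

for volume in lvm_volumes(ceph_osd_devices):
    print(volume["data_vg"], volume["data"])
```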
2026-04-05 00:47:03.030139 | orchestrator | Sunday 05 April 2026 00:46:59 +0000 (0:00:00.263) 0:00:44.461 **********
2026-04-05 00:47:03.030150 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:47:03.030161 | orchestrator |
2026-04-05 00:47:03.030172 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-04-05 00:47:03.030182 | orchestrator | Sunday 05 April 2026 00:46:59 +0000 (0:00:00.113) 0:00:44.574 **********
2026-04-05 00:47:03.030193 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:47:03.030204 | orchestrator |
2026-04-05 00:47:03.030215 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-04-05 00:47:03.030225 | orchestrator | Sunday 05 April 2026 00:46:59 +0000 (0:00:00.128) 0:00:44.703 **********
2026-04-05 00:47:03.030250 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:47:03.030261 | orchestrator |
2026-04-05 00:47:03.030272 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-05 00:47:03.030283 | orchestrator | Sunday 05 April 2026 00:46:59 +0000 (0:00:00.138) 0:00:44.841 **********
2026-04-05 00:47:03.030294 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:47:03.030304 | orchestrator |
2026-04-05 00:47:03.030315 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-04-05 00:47:03.030326 | orchestrator | Sunday 05 April 2026 00:46:59 +0000 (0:00:00.123) 0:00:44.965 **********
2026-04-05 00:47:03.030337 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:47:03.030347 | orchestrator |
2026-04-05 00:47:03.030358 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-04-05 00:47:03.030369 | orchestrator | Sunday 05 April 2026 00:46:59 +0000 (0:00:00.132) 0:00:45.098 **********
2026-04-05 00:47:03.030380 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:47:03.030391 | orchestrator |
2026-04-05 00:47:03.030402 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-04-05 00:47:03.030412 | orchestrator | Sunday 05 April 2026 00:46:59 +0000 (0:00:00.132) 0:00:45.230 **********
2026-04-05 00:47:03.030423 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:47:03.030452 | orchestrator |
2026-04-05 00:47:03.030466 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-04-05 00:47:03.030480 | orchestrator | Sunday 05 April 2026 00:47:00 +0000 (0:00:00.129) 0:00:45.359 **********
2026-04-05 00:47:03.030494 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:47:03.030530 | orchestrator |
2026-04-05 00:47:03.030541 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-04-05 00:47:03.030552 | orchestrator | Sunday 05 April 2026 00:47:00 +0000 (0:00:00.122) 0:00:45.482 **********
2026-04-05 00:47:03.030564 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c330a934-8550-546d-8551-a9ce4f4a4f0f', 'data_vg': 'ceph-c330a934-8550-546d-8551-a9ce4f4a4f0f'})
2026-04-05 00:47:03.030576 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-824ea9fd-8e44-5b08-9075-8333765a455e', 'data_vg': 'ceph-824ea9fd-8e44-5b08-9075-8333765a455e'})
2026-04-05 00:47:03.030588 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:47:03.030599 | orchestrator |
2026-04-05 00:47:03.030610 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-04-05 00:47:03.030621 | orchestrator | Sunday 05 April 2026 00:47:00 +0000 (0:00:00.143) 0:00:45.625 **********
2026-04-05 00:47:03.030631 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c330a934-8550-546d-8551-a9ce4f4a4f0f', 'data_vg': 'ceph-c330a934-8550-546d-8551-a9ce4f4a4f0f'})
2026-04-05 00:47:03.030642 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-824ea9fd-8e44-5b08-9075-8333765a455e', 'data_vg': 'ceph-824ea9fd-8e44-5b08-9075-8333765a455e'})
2026-04-05 00:47:03.030653 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:47:03.030664 | orchestrator |
2026-04-05 00:47:03.030675 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-04-05 00:47:03.030686 | orchestrator | Sunday 05 April 2026 00:47:00 +0000 (0:00:00.167) 0:00:45.792 **********
2026-04-05 00:47:03.030696 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c330a934-8550-546d-8551-a9ce4f4a4f0f', 'data_vg': 'ceph-c330a934-8550-546d-8551-a9ce4f4a4f0f'})
2026-04-05 00:47:03.030707 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-824ea9fd-8e44-5b08-9075-8333765a455e', 'data_vg': 'ceph-824ea9fd-8e44-5b08-9075-8333765a455e'})
2026-04-05 00:47:03.030718 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:47:03.030729 | orchestrator |
2026-04-05 00:47:03.030740 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-04-05 00:47:03.030751 | orchestrator | Sunday 05 April 2026 00:47:00 +0000 (0:00:00.136) 0:00:45.929 **********
2026-04-05 00:47:03.030762 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c330a934-8550-546d-8551-a9ce4f4a4f0f', 'data_vg': 'ceph-c330a934-8550-546d-8551-a9ce4f4a4f0f'})
2026-04-05 00:47:03.030774 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-824ea9fd-8e44-5b08-9075-8333765a455e', 'data_vg': 'ceph-824ea9fd-8e44-5b08-9075-8333765a455e'})
2026-04-05 00:47:03.030785 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:47:03.030795 | orchestrator |
2026-04-05 00:47:03.030823 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-04-05 00:47:03.030835 | orchestrator | Sunday 05 April 2026 00:47:00 +0000 (0:00:00.308) 0:00:46.237 **********
2026-04-05 00:47:03.030846 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c330a934-8550-546d-8551-a9ce4f4a4f0f', 'data_vg': 'ceph-c330a934-8550-546d-8551-a9ce4f4a4f0f'})
2026-04-05 00:47:03.030857 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-824ea9fd-8e44-5b08-9075-8333765a455e', 'data_vg': 'ceph-824ea9fd-8e44-5b08-9075-8333765a455e'})
2026-04-05 00:47:03.030868 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:47:03.030879 | orchestrator |
2026-04-05 00:47:03.030890 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-04-05 00:47:03.030901 | orchestrator | Sunday 05 April 2026 00:47:01 +0000 (0:00:00.154) 0:00:46.391 **********
2026-04-05 00:47:03.030921 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c330a934-8550-546d-8551-a9ce4f4a4f0f', 'data_vg': 'ceph-c330a934-8550-546d-8551-a9ce4f4a4f0f'})
2026-04-05 00:47:03.030933 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-824ea9fd-8e44-5b08-9075-8333765a455e', 'data_vg': 'ceph-824ea9fd-8e44-5b08-9075-8333765a455e'})
2026-04-05 00:47:03.030943 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:47:03.030954 | orchestrator |
2026-04-05 00:47:03.030965 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-04-05 00:47:03.030976 | orchestrator | Sunday 05 April 2026 00:47:01 +0000 (0:00:00.142) 0:00:46.534 **********
2026-04-05 00:47:03.030987 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c330a934-8550-546d-8551-a9ce4f4a4f0f', 'data_vg': 'ceph-c330a934-8550-546d-8551-a9ce4f4a4f0f'})
2026-04-05 00:47:03.030997 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-824ea9fd-8e44-5b08-9075-8333765a455e', 'data_vg': 'ceph-824ea9fd-8e44-5b08-9075-8333765a455e'})
2026-04-05 00:47:03.031008 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:47:03.031019 | orchestrator |
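Editor's note: the tasks that follow ("Get list of Ceph LVs/PVs with associated VGs", "Combine JSON from _lvs_cmd_output/_pvs_cmd_output") read LVM's JSON report output and merge it into the `lvm_report` structure printed at the end of the play. A sketch of that merge, assuming the raw inputs have the shape of stock `lvs --reportformat json` / `pvs --reportformat json` output (the literals below are trimmed examples matching the log, not live command output):

```python
import json

# Sketch: combine lvs/pvs JSON reports into the lvm_report structure
# printed by the "Print LVM report data" task. LVM's --reportformat json
# wraps results in {"report": [{"lv": [...]}]} / {"report": [{"pv": [...]}]}.
lvs_cmd_output = json.loads("""
{"report": [{"lv": [
  {"lv_name": "osd-block-824ea9fd-8e44-5b08-9075-8333765a455e",
   "vg_name": "ceph-824ea9fd-8e44-5b08-9075-8333765a455e"},
  {"lv_name": "osd-block-c330a934-8550-546d-8551-a9ce4f4a4f0f",
   "vg_name": "ceph-c330a934-8550-546d-8551-a9ce4f4a4f0f"}
]}]}
""")
pvs_cmd_output = json.loads("""
{"report": [{"pv": [
  {"pv_name": "/dev/sdb", "vg_name": "ceph-c330a934-8550-546d-8551-a9ce4f4a4f0f"},
  {"pv_name": "/dev/sdc", "vg_name": "ceph-824ea9fd-8e44-5b08-9075-8333765a455e"}
]}]}
""")

# Merge the two reports, keyed the same way as the playbook's printout.
lvm_report = {
    "lv": lvs_cmd_output["report"][0]["lv"],
    "pv": pvs_cmd_output["report"][0]["pv"],
}
print(json.dumps(lvm_report, indent=2))
```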
2026-04-05 00:47:03.031030 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-04-05 00:47:03.031041 | orchestrator | Sunday 05 April 2026 00:47:01 +0000 (0:00:00.136) 0:00:46.671 **********
2026-04-05 00:47:03.031052 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c330a934-8550-546d-8551-a9ce4f4a4f0f', 'data_vg': 'ceph-c330a934-8550-546d-8551-a9ce4f4a4f0f'})
2026-04-05 00:47:03.031063 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-824ea9fd-8e44-5b08-9075-8333765a455e', 'data_vg': 'ceph-824ea9fd-8e44-5b08-9075-8333765a455e'})
2026-04-05 00:47:03.031073 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:47:03.031084 | orchestrator |
2026-04-05 00:47:03.031095 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-04-05 00:47:03.031106 | orchestrator | Sunday 05 April 2026 00:47:01 +0000 (0:00:00.140) 0:00:46.811 **********
2026-04-05 00:47:03.031117 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:47:03.031128 | orchestrator |
2026-04-05 00:47:03.031138 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-04-05 00:47:03.031149 | orchestrator | Sunday 05 April 2026 00:47:01 +0000 (0:00:00.507) 0:00:47.319 **********
2026-04-05 00:47:03.031160 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:47:03.031171 | orchestrator |
2026-04-05 00:47:03.031181 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-04-05 00:47:03.031192 | orchestrator | Sunday 05 April 2026 00:47:02 +0000 (0:00:00.515) 0:00:47.834 **********
2026-04-05 00:47:03.031203 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:47:03.031214 | orchestrator |
2026-04-05 00:47:03.031224 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-04-05 00:47:03.031235 | orchestrator | Sunday 05 April 2026 00:47:02 +0000 (0:00:00.143) 0:00:47.978 **********
2026-04-05 00:47:03.031246 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-824ea9fd-8e44-5b08-9075-8333765a455e', 'vg_name': 'ceph-824ea9fd-8e44-5b08-9075-8333765a455e'})
2026-04-05 00:47:03.031257 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-c330a934-8550-546d-8551-a9ce4f4a4f0f', 'vg_name': 'ceph-c330a934-8550-546d-8551-a9ce4f4a4f0f'})
2026-04-05 00:47:03.031268 | orchestrator |
2026-04-05 00:47:03.031279 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-04-05 00:47:03.031290 | orchestrator | Sunday 05 April 2026 00:47:02 +0000 (0:00:00.172) 0:00:48.150 **********
2026-04-05 00:47:03.031300 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c330a934-8550-546d-8551-a9ce4f4a4f0f', 'data_vg': 'ceph-c330a934-8550-546d-8551-a9ce4f4a4f0f'})
2026-04-05 00:47:03.031344 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-824ea9fd-8e44-5b08-9075-8333765a455e', 'data_vg': 'ceph-824ea9fd-8e44-5b08-9075-8333765a455e'})
2026-04-05 00:47:03.031357 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:47:03.031374 | orchestrator |
2026-04-05 00:47:03.031385 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-04-05 00:47:03.031396 | orchestrator | Sunday 05 April 2026 00:47:02 +0000 (0:00:00.144) 0:00:48.295 **********
2026-04-05 00:47:03.031407 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c330a934-8550-546d-8551-a9ce4f4a4f0f', 'data_vg': 'ceph-c330a934-8550-546d-8551-a9ce4f4a4f0f'})
2026-04-05 00:47:03.031426 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-824ea9fd-8e44-5b08-9075-8333765a455e', 'data_vg': 'ceph-824ea9fd-8e44-5b08-9075-8333765a455e'})
2026-04-05 00:47:08.806185 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:47:08.806282 | orchestrator |
2026-04-05 00:47:08.806306 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-04-05 00:47:08.806325 | orchestrator | Sunday 05 April 2026 00:47:03 +0000 (0:00:00.130) 0:00:48.425 **********
2026-04-05 00:47:08.806342 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c330a934-8550-546d-8551-a9ce4f4a4f0f', 'data_vg': 'ceph-c330a934-8550-546d-8551-a9ce4f4a4f0f'})
2026-04-05 00:47:08.806358 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-824ea9fd-8e44-5b08-9075-8333765a455e', 'data_vg': 'ceph-824ea9fd-8e44-5b08-9075-8333765a455e'})
2026-04-05 00:47:08.806376 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:47:08.806393 | orchestrator |
2026-04-05 00:47:08.806410 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-04-05 00:47:08.806426 | orchestrator | Sunday 05 April 2026 00:47:03 +0000 (0:00:00.140) 0:00:48.566 **********
2026-04-05 00:47:08.806442 | orchestrator | ok: [testbed-node-4] => {
2026-04-05 00:47:08.806461 | orchestrator |  "lvm_report": {
2026-04-05 00:47:08.806477 | orchestrator |  "lv": [
2026-04-05 00:47:08.806553 | orchestrator |  {
2026-04-05 00:47:08.806576 | orchestrator |  "lv_name": "osd-block-824ea9fd-8e44-5b08-9075-8333765a455e",
2026-04-05 00:47:08.806594 | orchestrator |  "vg_name": "ceph-824ea9fd-8e44-5b08-9075-8333765a455e"
2026-04-05 00:47:08.806611 | orchestrator |  },
2026-04-05 00:47:08.806621 | orchestrator |  {
2026-04-05 00:47:08.806631 | orchestrator |  "lv_name": "osd-block-c330a934-8550-546d-8551-a9ce4f4a4f0f",
2026-04-05 00:47:08.806640 | orchestrator |  "vg_name": "ceph-c330a934-8550-546d-8551-a9ce4f4a4f0f"
2026-04-05 00:47:08.806650 | orchestrator |  }
2026-04-05 00:47:08.806659 | orchestrator |  ],
2026-04-05 00:47:08.806669 | orchestrator |  "pv": [
2026-04-05 00:47:08.806679 | orchestrator |  {
2026-04-05 00:47:08.806689 | orchestrator |  "pv_name": "/dev/sdb",
2026-04-05 00:47:08.806698 | orchestrator |  "vg_name": "ceph-c330a934-8550-546d-8551-a9ce4f4a4f0f"
2026-04-05 00:47:08.806708 | orchestrator |  },
2026-04-05 00:47:08.806718 | orchestrator |  {
2026-04-05 00:47:08.806727 | orchestrator |  "pv_name": "/dev/sdc",
2026-04-05 00:47:08.806737 | orchestrator |  "vg_name": "ceph-824ea9fd-8e44-5b08-9075-8333765a455e"
2026-04-05 00:47:08.806747 | orchestrator |  }
2026-04-05 00:47:08.806757 | orchestrator |  ]
2026-04-05 00:47:08.806766 | orchestrator |  }
2026-04-05 00:47:08.806776 | orchestrator | }
2026-04-05 00:47:08.806786 | orchestrator |
2026-04-05 00:47:08.806795 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-04-05 00:47:08.806805 | orchestrator |
2026-04-05 00:47:08.806814 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-05 00:47:08.806824 | orchestrator | Sunday 05 April 2026 00:47:03 +0000 (0:00:00.434) 0:00:49.000 **********
2026-04-05 00:47:08.806834 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-05 00:47:08.806844 | orchestrator |
2026-04-05 00:47:08.806854 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-05 00:47:08.806863 | orchestrator | Sunday 05 April 2026 00:47:03 +0000 (0:00:00.250) 0:00:49.251 **********
2026-04-05 00:47:08.806891 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:47:08.806901 | orchestrator |
2026-04-05 00:47:08.806911 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:47:08.806920 | orchestrator | Sunday 05 April 2026 00:47:04 +0000 (0:00:00.239) 0:00:49.491 **********
2026-04-05 00:47:08.806930 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-04-05 00:47:08.806939 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-04-05 00:47:08.806949 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-04-05 00:47:08.806962 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-04-05 00:47:08.806971 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-04-05 00:47:08.806979 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-04-05 00:47:08.806987 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-04-05 00:47:08.806995 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-04-05 00:47:08.807003 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-04-05 00:47:08.807010 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-04-05 00:47:08.807018 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-04-05 00:47:08.807026 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-04-05 00:47:08.807034 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-04-05 00:47:08.807041 | orchestrator |
2026-04-05 00:47:08.807049 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:47:08.807057 | orchestrator | Sunday 05 April 2026 00:47:04 +0000 (0:00:00.426) 0:00:49.917 **********
2026-04-05 00:47:08.807065 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:08.807073 | orchestrator |
2026-04-05 00:47:08.807080 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:47:08.807088 | orchestrator | Sunday 05 April 2026 00:47:04 +0000 (0:00:00.194) 0:00:50.111 **********
2026-04-05 00:47:08.807096 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:08.807104 | orchestrator |
2026-04-05 00:47:08.807112 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:47:08.807134 | orchestrator | Sunday 05 April 2026 00:47:04 +0000 (0:00:00.223) 0:00:50.335 **********
2026-04-05 00:47:08.807142 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:08.807150 | orchestrator |
2026-04-05 00:47:08.807158 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:47:08.807166 | orchestrator | Sunday 05 April 2026 00:47:05 +0000 (0:00:00.218) 0:00:50.554 **********
2026-04-05 00:47:08.807173 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:08.807181 | orchestrator |
2026-04-05 00:47:08.807189 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:47:08.807197 | orchestrator | Sunday 05 April 2026 00:47:05 +0000 (0:00:00.205) 0:00:50.760 **********
2026-04-05 00:47:08.807204 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:08.807212 | orchestrator |
2026-04-05 00:47:08.807220 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:47:08.807228 | orchestrator | Sunday 05 April 2026 00:47:05 +0000 (0:00:00.229) 0:00:50.989 **********
2026-04-05 00:47:08.807236 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:08.807244 | orchestrator |
2026-04-05 00:47:08.807251 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:47:08.807264 | orchestrator | Sunday 05 April 2026 00:47:06 +0000 (0:00:00.505) 0:00:51.494 **********
2026-04-05 00:47:08.807272 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:08.807285 | orchestrator |
2026-04-05 00:47:08.807293 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:47:08.807301 | orchestrator | Sunday 05 April 2026 00:47:06 +0000 (0:00:00.172) 0:00:51.666 **********
2026-04-05 00:47:08.807308 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:08.807316 | orchestrator |
2026-04-05 00:47:08.807324 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:47:08.807331 | orchestrator | Sunday 05 April 2026 00:47:06 +0000 (0:00:00.180) 0:00:51.847 **********
2026-04-05 00:47:08.807339 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d10d19df-84d5-4f9c-9dff-ab89b235cba9)
2026-04-05 00:47:08.807347 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d10d19df-84d5-4f9c-9dff-ab89b235cba9)
2026-04-05 00:47:08.807355 | orchestrator |
2026-04-05 00:47:08.807363 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:47:08.807370 | orchestrator | Sunday 05 April 2026 00:47:06 +0000 (0:00:00.417) 0:00:52.265 **********
2026-04-05 00:47:08.807378 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a543ca24-8ce5-4d4d-a7ab-f0db2d7f7bb2)
2026-04-05 00:47:08.807386 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a543ca24-8ce5-4d4d-a7ab-f0db2d7f7bb2)
2026-04-05 00:47:08.807393 | orchestrator |
2026-04-05 00:47:08.807401 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:47:08.807409 | orchestrator | Sunday 05 April 2026 00:47:07 +0000 (0:00:00.405) 0:00:52.671 **********
2026-04-05 00:47:08.807417 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e02e3eed-6f8b-4cff-9a7e-0f14751ef6ba)
2026-04-05 00:47:08.807425 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e02e3eed-6f8b-4cff-9a7e-0f14751ef6ba)
2026-04-05 00:47:08.807432 | orchestrator |
2026-04-05 00:47:08.807440 | orchestrator |
TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:47:08.807447 | orchestrator | Sunday 05 April 2026 00:47:07 +0000 (0:00:00.426) 0:00:53.097 ********** 2026-04-05 00:47:08.807455 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_160e21cb-7f36-4211-96c7-9609d25dd0e2) 2026-04-05 00:47:08.807463 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_160e21cb-7f36-4211-96c7-9609d25dd0e2) 2026-04-05 00:47:08.807471 | orchestrator | 2026-04-05 00:47:08.807478 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:47:08.807486 | orchestrator | Sunday 05 April 2026 00:47:08 +0000 (0:00:00.424) 0:00:53.522 ********** 2026-04-05 00:47:08.807494 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-05 00:47:08.807533 | orchestrator | 2026-04-05 00:47:08.807541 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:47:08.807549 | orchestrator | Sunday 05 April 2026 00:47:08 +0000 (0:00:00.298) 0:00:53.820 ********** 2026-04-05 00:47:08.807557 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-04-05 00:47:08.807565 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-04-05 00:47:08.807573 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-04-05 00:47:08.807580 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-04-05 00:47:08.807588 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-04-05 00:47:08.807596 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-04-05 00:47:08.807604 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-04-05 00:47:08.807612 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-04-05 00:47:08.807620 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-04-05 00:47:08.807633 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-04-05 00:47:08.807641 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-04-05 00:47:08.807654 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-04-05 00:47:17.979673 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-04-05 00:47:17.979785 | orchestrator | 2026-04-05 00:47:17.979801 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:47:17.979813 | orchestrator | Sunday 05 April 2026 00:47:08 +0000 (0:00:00.401) 0:00:54.222 ********** 2026-04-05 00:47:17.979825 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:17.979837 | orchestrator | 2026-04-05 00:47:17.979848 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:47:17.979859 | orchestrator | Sunday 05 April 2026 00:47:09 +0000 (0:00:00.183) 0:00:54.405 ********** 2026-04-05 00:47:17.979870 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:17.979881 | orchestrator | 2026-04-05 00:47:17.979891 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:47:17.979902 | orchestrator | Sunday 05 April 2026 00:47:09 +0000 (0:00:00.193) 0:00:54.598 ********** 2026-04-05 00:47:17.979913 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:17.979923 | orchestrator | 2026-04-05 00:47:17.979934 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:47:17.979962 | orchestrator | Sunday 05 April 2026 00:47:09 +0000 (0:00:00.662) 0:00:55.261 ********** 2026-04-05 00:47:17.979973 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:17.979984 | orchestrator | 2026-04-05 00:47:17.979995 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:47:17.980006 | orchestrator | Sunday 05 April 2026 00:47:10 +0000 (0:00:00.210) 0:00:55.471 ********** 2026-04-05 00:47:17.980016 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:17.980027 | orchestrator | 2026-04-05 00:47:17.980038 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:47:17.980049 | orchestrator | Sunday 05 April 2026 00:47:10 +0000 (0:00:00.204) 0:00:55.676 ********** 2026-04-05 00:47:17.980059 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:17.980070 | orchestrator | 2026-04-05 00:47:17.980081 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:47:17.980092 | orchestrator | Sunday 05 April 2026 00:47:10 +0000 (0:00:00.210) 0:00:55.887 ********** 2026-04-05 00:47:17.980102 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:17.980114 | orchestrator | 2026-04-05 00:47:17.980127 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:47:17.980139 | orchestrator | Sunday 05 April 2026 00:47:10 +0000 (0:00:00.199) 0:00:56.086 ********** 2026-04-05 00:47:17.980155 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:17.980174 | orchestrator | 2026-04-05 00:47:17.980202 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:47:17.980222 | orchestrator | Sunday 05 April 2026 00:47:10 +0000 (0:00:00.213) 0:00:56.300 ********** 
2026-04-05 00:47:17.980240 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-04-05 00:47:17.980258 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-04-05 00:47:17.980275 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-04-05 00:47:17.980291 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-04-05 00:47:17.980306 | orchestrator |
2026-04-05 00:47:17.980321 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:47:17.980336 | orchestrator | Sunday 05 April 2026 00:47:11 +0000 (0:00:00.735) 0:00:57.035 **********
2026-04-05 00:47:17.980353 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:17.980370 | orchestrator |
2026-04-05 00:47:17.980387 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:47:17.980432 | orchestrator | Sunday 05 April 2026 00:47:11 +0000 (0:00:00.261) 0:00:57.297 **********
2026-04-05 00:47:17.980453 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:17.980470 | orchestrator |
2026-04-05 00:47:17.980490 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:47:17.980537 | orchestrator | Sunday 05 April 2026 00:47:12 +0000 (0:00:00.217) 0:00:57.514 **********
2026-04-05 00:47:17.980556 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:17.980575 | orchestrator |
2026-04-05 00:47:17.980588 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:47:17.980598 | orchestrator | Sunday 05 April 2026 00:47:12 +0000 (0:00:00.246) 0:00:57.761 **********
2026-04-05 00:47:17.980609 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:17.980620 | orchestrator |
2026-04-05 00:47:17.980630 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-05 00:47:17.980641 | orchestrator | Sunday 05 April 2026 00:47:12 +0000 (0:00:00.223) 0:00:57.984 **********
2026-04-05 00:47:17.980652 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:17.980662 | orchestrator |
2026-04-05 00:47:17.980673 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-05 00:47:17.980684 | orchestrator | Sunday 05 April 2026 00:47:12 +0000 (0:00:00.169) 0:00:58.154 **********
2026-04-05 00:47:17.980695 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3bb92c70-c222-5380-a7bf-d21f250fcd2a'}})
2026-04-05 00:47:17.980707 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '377d1900-3c05-5c55-820b-3d4ba19b512c'}})
2026-04-05 00:47:17.980717 | orchestrator |
2026-04-05 00:47:17.980728 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-05 00:47:17.980739 | orchestrator | Sunday 05 April 2026 00:47:13 +0000 (0:00:00.416) 0:00:58.570 **********
2026-04-05 00:47:17.980751 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-3bb92c70-c222-5380-a7bf-d21f250fcd2a', 'data_vg': 'ceph-3bb92c70-c222-5380-a7bf-d21f250fcd2a'})
2026-04-05 00:47:17.980764 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-377d1900-3c05-5c55-820b-3d4ba19b512c', 'data_vg': 'ceph-377d1900-3c05-5c55-820b-3d4ba19b512c'})
2026-04-05 00:47:17.980775 | orchestrator |
2026-04-05 00:47:17.980786 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-05 00:47:17.980817 | orchestrator | Sunday 05 April 2026 00:47:15 +0000 (0:00:01.997) 0:01:00.568 **********
2026-04-05 00:47:17.980829 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3bb92c70-c222-5380-a7bf-d21f250fcd2a', 'data_vg': 'ceph-3bb92c70-c222-5380-a7bf-d21f250fcd2a'})
2026-04-05 00:47:17.980841 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-377d1900-3c05-5c55-820b-3d4ba19b512c', 'data_vg': 'ceph-377d1900-3c05-5c55-820b-3d4ba19b512c'})
2026-04-05 00:47:17.980852 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:17.980863 | orchestrator |
2026-04-05 00:47:17.980874 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-05 00:47:17.980884 | orchestrator | Sunday 05 April 2026 00:47:15 +0000 (0:00:00.152) 0:01:00.720 **********
2026-04-05 00:47:17.980895 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-3bb92c70-c222-5380-a7bf-d21f250fcd2a', 'data_vg': 'ceph-3bb92c70-c222-5380-a7bf-d21f250fcd2a'})
2026-04-05 00:47:17.980907 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-377d1900-3c05-5c55-820b-3d4ba19b512c', 'data_vg': 'ceph-377d1900-3c05-5c55-820b-3d4ba19b512c'})
2026-04-05 00:47:17.980918 | orchestrator |
2026-04-05 00:47:17.980929 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-05 00:47:17.980940 | orchestrator | Sunday 05 April 2026 00:47:16 +0000 (0:00:01.335) 0:01:02.056 **********
2026-04-05 00:47:17.980950 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3bb92c70-c222-5380-a7bf-d21f250fcd2a', 'data_vg': 'ceph-3bb92c70-c222-5380-a7bf-d21f250fcd2a'})
2026-04-05 00:47:17.980970 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-377d1900-3c05-5c55-820b-3d4ba19b512c', 'data_vg': 'ceph-377d1900-3c05-5c55-820b-3d4ba19b512c'})
2026-04-05 00:47:17.980982 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:17.980992 | orchestrator |
2026-04-05 00:47:17.981003 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-05 00:47:17.981014 | orchestrator | Sunday 05 April 2026 00:47:16 +0000 (0:00:00.146) 0:01:02.202 **********
2026-04-05 00:47:17.981025 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:17.981036 | orchestrator |
2026-04-05 00:47:17.981046 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-05 00:47:17.981057 | orchestrator | Sunday 05 April 2026 00:47:16 +0000 (0:00:00.130) 0:01:02.333 **********
2026-04-05 00:47:17.981068 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3bb92c70-c222-5380-a7bf-d21f250fcd2a', 'data_vg': 'ceph-3bb92c70-c222-5380-a7bf-d21f250fcd2a'})
2026-04-05 00:47:17.981079 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-377d1900-3c05-5c55-820b-3d4ba19b512c', 'data_vg': 'ceph-377d1900-3c05-5c55-820b-3d4ba19b512c'})
2026-04-05 00:47:17.981090 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:17.981101 | orchestrator |
2026-04-05 00:47:17.981112 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-05 00:47:17.981123 | orchestrator | Sunday 05 April 2026 00:47:17 +0000 (0:00:00.154) 0:01:02.487 **********
2026-04-05 00:47:17.981134 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:17.981144 | orchestrator |
2026-04-05 00:47:17.981155 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-05 00:47:17.981176 | orchestrator | Sunday 05 April 2026 00:47:17 +0000 (0:00:00.143) 0:01:02.631 **********
2026-04-05 00:47:17.981188 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3bb92c70-c222-5380-a7bf-d21f250fcd2a', 'data_vg': 'ceph-3bb92c70-c222-5380-a7bf-d21f250fcd2a'})
2026-04-05 00:47:17.981199 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-377d1900-3c05-5c55-820b-3d4ba19b512c', 'data_vg': 'ceph-377d1900-3c05-5c55-820b-3d4ba19b512c'})
2026-04-05 00:47:17.981209 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:17.981220 | orchestrator |
2026-04-05 00:47:17.981231 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-05 00:47:17.981242 | orchestrator | Sunday 05 April 2026 00:47:17 +0000 (0:00:00.153) 0:01:02.784 **********
2026-04-05 00:47:17.981252 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:17.981263 | orchestrator |
2026-04-05 00:47:17.981274 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-05 00:47:17.981285 | orchestrator | Sunday 05 April 2026 00:47:17 +0000 (0:00:00.150) 0:01:02.935 **********
2026-04-05 00:47:17.981296 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3bb92c70-c222-5380-a7bf-d21f250fcd2a', 'data_vg': 'ceph-3bb92c70-c222-5380-a7bf-d21f250fcd2a'})
2026-04-05 00:47:17.981306 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-377d1900-3c05-5c55-820b-3d4ba19b512c', 'data_vg': 'ceph-377d1900-3c05-5c55-820b-3d4ba19b512c'})
2026-04-05 00:47:17.981317 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:17.981328 | orchestrator |
2026-04-05 00:47:17.981339 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-05 00:47:17.981349 | orchestrator | Sunday 05 April 2026 00:47:17 +0000 (0:00:00.162) 0:01:03.097 **********
2026-04-05 00:47:17.981360 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:47:17.981371 | orchestrator |
2026-04-05 00:47:17.981382 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-05 00:47:17.981393 | orchestrator | Sunday 05 April 2026 00:47:17 +0000 (0:00:00.143) 0:01:03.241 **********
2026-04-05 00:47:17.981410 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3bb92c70-c222-5380-a7bf-d21f250fcd2a', 'data_vg': 'ceph-3bb92c70-c222-5380-a7bf-d21f250fcd2a'})
2026-04-05 00:47:24.393313 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-377d1900-3c05-5c55-820b-3d4ba19b512c', 'data_vg': 'ceph-377d1900-3c05-5c55-820b-3d4ba19b512c'})
2026-04-05 00:47:24.393415 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:24.393431 | orchestrator |
2026-04-05 00:47:24.393441 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-05 00:47:24.393454 | orchestrator | Sunday 05 April 2026 00:47:18 +0000 (0:00:00.359) 0:01:03.600 **********
2026-04-05 00:47:24.393464 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3bb92c70-c222-5380-a7bf-d21f250fcd2a', 'data_vg': 'ceph-3bb92c70-c222-5380-a7bf-d21f250fcd2a'})
2026-04-05 00:47:24.393474 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-377d1900-3c05-5c55-820b-3d4ba19b512c', 'data_vg': 'ceph-377d1900-3c05-5c55-820b-3d4ba19b512c'})
2026-04-05 00:47:24.393483 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:24.393492 | orchestrator |
2026-04-05 00:47:24.393535 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-05 00:47:24.393546 | orchestrator | Sunday 05 April 2026 00:47:18 +0000 (0:00:00.160) 0:01:03.760 **********
2026-04-05 00:47:24.393555 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3bb92c70-c222-5380-a7bf-d21f250fcd2a', 'data_vg': 'ceph-3bb92c70-c222-5380-a7bf-d21f250fcd2a'})
2026-04-05 00:47:24.393564 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-377d1900-3c05-5c55-820b-3d4ba19b512c', 'data_vg': 'ceph-377d1900-3c05-5c55-820b-3d4ba19b512c'})
2026-04-05 00:47:24.393573 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:24.393583 | orchestrator |
2026-04-05 00:47:24.393592 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-05 00:47:24.393601 | orchestrator | Sunday 05 April 2026 00:47:18 +0000 (0:00:00.172) 0:01:03.933 **********
2026-04-05 00:47:24.393610 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:24.393618 | orchestrator |
2026-04-05 00:47:24.393627 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-05 00:47:24.393637 | orchestrator | Sunday 05 April 2026 00:47:18 +0000 (0:00:00.145) 0:01:04.078 **********
2026-04-05 00:47:24.393646 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:24.393655 | orchestrator |
2026-04-05 00:47:24.393664 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-05 00:47:24.393674 | orchestrator | Sunday 05 April 2026 00:47:18 +0000 (0:00:00.141) 0:01:04.219 **********
2026-04-05 00:47:24.393683 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:24.393692 | orchestrator |
2026-04-05 00:47:24.393698 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-05 00:47:24.393703 | orchestrator | Sunday 05 April 2026 00:47:19 +0000 (0:00:00.130) 0:01:04.350 **********
2026-04-05 00:47:24.393709 | orchestrator | ok: [testbed-node-5] => {
2026-04-05 00:47:24.393715 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-04-05 00:47:24.393721 | orchestrator | }
2026-04-05 00:47:24.393726 | orchestrator |
2026-04-05 00:47:24.393732 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-05 00:47:24.393737 | orchestrator | Sunday 05 April 2026 00:47:19 +0000 (0:00:00.144) 0:01:04.495 **********
2026-04-05 00:47:24.393743 | orchestrator | ok: [testbed-node-5] => {
2026-04-05 00:47:24.393748 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-04-05 00:47:24.393754 | orchestrator | }
2026-04-05 00:47:24.393759 | orchestrator |
2026-04-05 00:47:24.393765 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-05 00:47:24.393770 | orchestrator | Sunday 05 April 2026 00:47:19 +0000 (0:00:00.145) 0:01:04.640 **********
2026-04-05 00:47:24.393775 | orchestrator | ok: [testbed-node-5] => {
2026-04-05 00:47:24.393781 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-04-05 00:47:24.393787 | orchestrator | }
2026-04-05 00:47:24.393792 | orchestrator |
2026-04-05 00:47:24.393798 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-05 00:47:24.393803 | orchestrator | Sunday 05 April 2026 00:47:19 +0000 (0:00:00.148) 0:01:04.789 **********
2026-04-05 00:47:24.393830 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:47:24.393836 | orchestrator |
2026-04-05 00:47:24.393841 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-05 00:47:24.393846 | orchestrator | Sunday 05 April 2026 00:47:19 +0000 (0:00:00.522) 0:01:05.311 **********
2026-04-05 00:47:24.393852 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:47:24.393857 | orchestrator |
2026-04-05 00:47:24.393863 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-05 00:47:24.393868 | orchestrator | Sunday 05 April 2026 00:47:20 +0000 (0:00:00.496) 0:01:05.808 **********
2026-04-05 00:47:24.393875 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:47:24.393881 | orchestrator |
2026-04-05 00:47:24.393887 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-05 00:47:24.393894 | orchestrator | Sunday 05 April 2026 00:47:20 +0000 (0:00:00.513) 0:01:06.321 **********
2026-04-05 00:47:24.393900 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:47:24.393906 | orchestrator |
2026-04-05 00:47:24.393912 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-05 00:47:24.393919 | orchestrator | Sunday 05 April 2026 00:47:21 +0000 (0:00:00.349) 0:01:06.671 **********
2026-04-05 00:47:24.393925 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:24.393931 | orchestrator |
2026-04-05 00:47:24.393937 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-05 00:47:24.393943 | orchestrator | Sunday 05 April 2026 00:47:21 +0000 (0:00:00.104) 0:01:06.776 **********
2026-04-05 00:47:24.393949 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:24.393955 | orchestrator |
2026-04-05 00:47:24.393962 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-05 00:47:24.393968 | orchestrator | Sunday 05 April 2026 00:47:21 +0000 (0:00:00.111) 0:01:06.887 **********
2026-04-05 00:47:24.393974 | orchestrator | ok: [testbed-node-5] => {
2026-04-05 00:47:24.393980 | orchestrator |     "vgs_report": {
2026-04-05 00:47:24.393987 | orchestrator |         "vg": []
2026-04-05 00:47:24.394007 | orchestrator |     }
2026-04-05 00:47:24.394059 | orchestrator | }
2026-04-05 00:47:24.394068 | orchestrator |
2026-04-05 00:47:24.394074 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-04-05 00:47:24.394081 | orchestrator | Sunday 05 April 2026 00:47:21 +0000 (0:00:00.136) 0:01:07.024 **********
2026-04-05 00:47:24.394087 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:24.394094 | orchestrator |
2026-04-05 00:47:24.394100 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-04-05 00:47:24.394106 | orchestrator | Sunday 05 April 2026 00:47:21 +0000 (0:00:00.149) 0:01:07.174 **********
2026-04-05 00:47:24.394112 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:24.394117 | orchestrator |
2026-04-05 00:47:24.394122 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-04-05 00:47:24.394128 | orchestrator | Sunday 05 April 2026 00:47:21 +0000 (0:00:00.141) 0:01:07.316 **********
2026-04-05 00:47:24.394133 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:24.394139 | orchestrator |
2026-04-05 00:47:24.394144 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-04-05 00:47:24.394155 | orchestrator | Sunday 05 April 2026 00:47:22 +0000 (0:00:00.139) 0:01:07.455 **********
2026-04-05 00:47:24.394160 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:24.394166 | orchestrator |
2026-04-05 00:47:24.394171 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-04-05 00:47:24.394177 | orchestrator | Sunday 05 April 2026 00:47:22 +0000 (0:00:00.153) 0:01:07.609 **********
2026-04-05 00:47:24.394182 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:24.394187 | orchestrator |
2026-04-05 00:47:24.394193 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-04-05 00:47:24.394198 | orchestrator | Sunday 05 April 2026 00:47:22 +0000 (0:00:00.157) 0:01:07.766 **********
2026-04-05 00:47:24.394204 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:24.394215 | orchestrator |
2026-04-05 00:47:24.394220 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-04-05 00:47:24.394226 | orchestrator | Sunday 05 April 2026 00:47:22 +0000 (0:00:00.138) 0:01:07.905 **********
2026-04-05 00:47:24.394231 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:24.394236 | orchestrator |
2026-04-05 00:47:24.394242 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-04-05 00:47:24.394247 | orchestrator | Sunday 05 April 2026 00:47:22 +0000 (0:00:00.128) 0:01:08.033 **********
2026-04-05 00:47:24.394252 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:24.394258 | orchestrator |
2026-04-05 00:47:24.394263 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-04-05 00:47:24.394269 | orchestrator | Sunday 05 April 2026 00:47:22 +0000 (0:00:00.164) 0:01:08.198 **********
2026-04-05 00:47:24.394274 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:24.394279 | orchestrator |
2026-04-05 00:47:24.394285 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-05 00:47:24.394290 | orchestrator | Sunday 05 April 2026 00:47:23 +0000 (0:00:00.372) 0:01:08.571 **********
2026-04-05 00:47:24.394296 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:24.394302 | orchestrator |
2026-04-05 00:47:24.394311 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-04-05 00:47:24.394320 | orchestrator | Sunday 05 April 2026 00:47:23 +0000 (0:00:00.147) 0:01:08.719 **********
2026-04-05 00:47:24.394329 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:24.394337 | orchestrator |
2026-04-05 00:47:24.394347 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-04-05 00:47:24.394357 | orchestrator | Sunday 05 April 2026 00:47:23 +0000 (0:00:00.157) 0:01:08.876 **********
2026-04-05 00:47:24.394367 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:24.394377 | orchestrator |
2026-04-05 00:47:24.394387 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-04-05 00:47:24.394397 | orchestrator | Sunday 05 April 2026 00:47:23 +0000 (0:00:00.151) 0:01:09.028 **********
2026-04-05 00:47:24.394407 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:24.394416 | orchestrator |
2026-04-05 00:47:24.394426 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-04-05 00:47:24.394436 | orchestrator | Sunday 05 April 2026 00:47:23 +0000 (0:00:00.140) 0:01:09.169 **********
2026-04-05 00:47:24.394445 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:24.394454 | orchestrator |
2026-04-05 00:47:24.394463 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-04-05 00:47:24.394473 | orchestrator | Sunday 05 April 2026 00:47:23 +0000 (0:00:00.143) 0:01:09.312 **********
2026-04-05 00:47:24.394482 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3bb92c70-c222-5380-a7bf-d21f250fcd2a', 'data_vg': 'ceph-3bb92c70-c222-5380-a7bf-d21f250fcd2a'})
2026-04-05 00:47:24.394492 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-377d1900-3c05-5c55-820b-3d4ba19b512c', 'data_vg': 'ceph-377d1900-3c05-5c55-820b-3d4ba19b512c'})
2026-04-05 00:47:24.394520 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:24.394529 | orchestrator |
2026-04-05 00:47:24.394537 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-04-05 00:47:24.394546 | orchestrator | Sunday 05 April 2026 00:47:24 +0000 (0:00:00.163) 0:01:09.476 **********
2026-04-05 00:47:24.394555 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3bb92c70-c222-5380-a7bf-d21f250fcd2a', 'data_vg': 'ceph-3bb92c70-c222-5380-a7bf-d21f250fcd2a'})
2026-04-05 00:47:24.394563 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-377d1900-3c05-5c55-820b-3d4ba19b512c', 'data_vg': 'ceph-377d1900-3c05-5c55-820b-3d4ba19b512c'})
2026-04-05 00:47:24.394572 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:24.394580 | orchestrator |
2026-04-05 00:47:24.394589 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-04-05 00:47:24.394609 | orchestrator | Sunday 05 April 2026 00:47:24 +0000 (0:00:00.163) 0:01:09.639 **********
2026-04-05 00:47:24.394629 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3bb92c70-c222-5380-a7bf-d21f250fcd2a', 'data_vg': 'ceph-3bb92c70-c222-5380-a7bf-d21f250fcd2a'})
2026-04-05 00:47:27.361881 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-377d1900-3c05-5c55-820b-3d4ba19b512c', 'data_vg': 'ceph-377d1900-3c05-5c55-820b-3d4ba19b512c'})
2026-04-05 00:47:27.361931 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:27.361937 | orchestrator |
2026-04-05 00:47:27.361942 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-04-05 00:47:27.361947 | orchestrator | Sunday 05 April 2026 00:47:24 +0000 (0:00:00.177) 0:01:09.817 **********
2026-04-05 00:47:27.361951 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3bb92c70-c222-5380-a7bf-d21f250fcd2a', 'data_vg': 'ceph-3bb92c70-c222-5380-a7bf-d21f250fcd2a'})
2026-04-05 00:47:27.361963 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-377d1900-3c05-5c55-820b-3d4ba19b512c', 'data_vg': 'ceph-377d1900-3c05-5c55-820b-3d4ba19b512c'})
2026-04-05 00:47:27.361967 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:27.361971 | orchestrator |
2026-04-05 00:47:27.361976 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-04-05 00:47:27.361980 | orchestrator | Sunday 05 April 2026 00:47:24 +0000 (0:00:00.149) 0:01:09.966 **********
2026-04-05 00:47:27.361984 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3bb92c70-c222-5380-a7bf-d21f250fcd2a', 'data_vg': 'ceph-3bb92c70-c222-5380-a7bf-d21f250fcd2a'})
2026-04-05 00:47:27.361988 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-377d1900-3c05-5c55-820b-3d4ba19b512c', 'data_vg': 'ceph-377d1900-3c05-5c55-820b-3d4ba19b512c'})
2026-04-05 00:47:27.361992 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:27.361997 | orchestrator |
2026-04-05 00:47:27.362001 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-04-05 00:47:27.362005 | orchestrator | Sunday 05 April 2026 00:47:24 +0000 (0:00:00.167) 0:01:10.134 **********
2026-04-05 00:47:27.362009 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3bb92c70-c222-5380-a7bf-d21f250fcd2a', 'data_vg': 'ceph-3bb92c70-c222-5380-a7bf-d21f250fcd2a'})
2026-04-05 00:47:27.362039 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-377d1900-3c05-5c55-820b-3d4ba19b512c', 'data_vg': 'ceph-377d1900-3c05-5c55-820b-3d4ba19b512c'})
2026-04-05 00:47:27.362044 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:27.362048 | orchestrator |
2026-04-05 00:47:27.362052 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-04-05 00:47:27.362056 | orchestrator | Sunday 05 April 2026 00:47:24 +0000 (0:00:00.153) 0:01:10.288 **********
2026-04-05 00:47:27.362061 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3bb92c70-c222-5380-a7bf-d21f250fcd2a', 'data_vg': 'ceph-3bb92c70-c222-5380-a7bf-d21f250fcd2a'})
2026-04-05 00:47:27.362065 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-377d1900-3c05-5c55-820b-3d4ba19b512c', 'data_vg': 'ceph-377d1900-3c05-5c55-820b-3d4ba19b512c'})
2026-04-05 00:47:27.362069 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:27.362073 | orchestrator |
2026-04-05 00:47:27.362077 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-04-05 00:47:27.362081 | orchestrator | Sunday 05 April 2026 00:47:25 +0000 (0:00:00.325) 0:01:10.613 **********
2026-04-05 00:47:27.362085 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3bb92c70-c222-5380-a7bf-d21f250fcd2a', 'data_vg': 'ceph-3bb92c70-c222-5380-a7bf-d21f250fcd2a'})
2026-04-05 00:47:27.362090 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-377d1900-3c05-5c55-820b-3d4ba19b512c', 'data_vg': 'ceph-377d1900-3c05-5c55-820b-3d4ba19b512c'})
2026-04-05 00:47:27.362094 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:47:27.362108 | orchestrator |
2026-04-05 00:47:27.362112 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-04-05 00:47:27.362117 | orchestrator | Sunday 05 April 2026 00:47:25 +0000 (0:00:00.153) 0:01:10.766 **********
2026-04-05 00:47:27.362121 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:47:27.362125 | orchestrator |
2026-04-05 00:47:27.362129 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-04-05 00:47:27.362133 | orchestrator | Sunday 05 April 2026 00:47:25 +0000 (0:00:00.503) 0:01:11.270 **********
2026-04-05 00:47:27.362137 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:47:27.362142 | orchestrator |
2026-04-05 00:47:27.362146 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-04-05 00:47:27.362150 | orchestrator | Sunday 05 April 2026 00:47:26 +0000 (0:00:00.527) 0:01:11.798 **********
2026-04-05 00:47:27.362154 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:47:27.362158 | orchestrator |
2026-04-05 00:47:27.362162 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-04-05 00:47:27.362166 | orchestrator | Sunday 05 April 2026 00:47:26 +0000 (0:00:00.140) 0:01:11.938 **********
2026-04-05 00:47:27.362171 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-377d1900-3c05-5c55-820b-3d4ba19b512c', 'vg_name': 'ceph-377d1900-3c05-5c55-820b-3d4ba19b512c'})
2026-04-05 00:47:27.362175 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-3bb92c70-c222-5380-a7bf-d21f250fcd2a', 'vg_name': 'ceph-3bb92c70-c222-5380-a7bf-d21f250fcd2a'})
2026-04-05 00:47:27.362179 | orchestrator |
2026-04-05 00:47:27.362183 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-04-05 00:47:27.362188 | orchestrator | Sunday 05 April 2026 00:47:26 +0000 (0:00:00.162) 0:01:12.101 **********
2026-04-05 00:47:27.362200 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3bb92c70-c222-5380-a7bf-d21f250fcd2a', 'data_vg':
'ceph-3bb92c70-c222-5380-a7bf-d21f250fcd2a'})  2026-04-05 00:47:27.362205 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-377d1900-3c05-5c55-820b-3d4ba19b512c', 'data_vg': 'ceph-377d1900-3c05-5c55-820b-3d4ba19b512c'})  2026-04-05 00:47:27.362209 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:27.362213 | orchestrator | 2026-04-05 00:47:27.362224 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-04-05 00:47:27.362228 | orchestrator | Sunday 05 April 2026 00:47:26 +0000 (0:00:00.161) 0:01:12.262 ********** 2026-04-05 00:47:27.362237 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3bb92c70-c222-5380-a7bf-d21f250fcd2a', 'data_vg': 'ceph-3bb92c70-c222-5380-a7bf-d21f250fcd2a'})  2026-04-05 00:47:27.362241 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-377d1900-3c05-5c55-820b-3d4ba19b512c', 'data_vg': 'ceph-377d1900-3c05-5c55-820b-3d4ba19b512c'})  2026-04-05 00:47:27.362245 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:27.362250 | orchestrator | 2026-04-05 00:47:27.362254 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-05 00:47:27.362258 | orchestrator | Sunday 05 April 2026 00:47:27 +0000 (0:00:00.151) 0:01:12.413 ********** 2026-04-05 00:47:27.362262 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3bb92c70-c222-5380-a7bf-d21f250fcd2a', 'data_vg': 'ceph-3bb92c70-c222-5380-a7bf-d21f250fcd2a'})  2026-04-05 00:47:27.362266 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-377d1900-3c05-5c55-820b-3d4ba19b512c', 'data_vg': 'ceph-377d1900-3c05-5c55-820b-3d4ba19b512c'})  2026-04-05 00:47:27.362270 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:27.362275 | orchestrator | 2026-04-05 00:47:27.362279 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-05 
00:47:27.362283 | orchestrator | Sunday 05 April 2026 00:47:27 +0000 (0:00:00.136) 0:01:12.550 ********** 2026-04-05 00:47:27.362287 | orchestrator | ok: [testbed-node-5] => { 2026-04-05 00:47:27.362291 | orchestrator |  "lvm_report": { 2026-04-05 00:47:27.362295 | orchestrator |  "lv": [ 2026-04-05 00:47:27.362303 | orchestrator |  { 2026-04-05 00:47:27.362307 | orchestrator |  "lv_name": "osd-block-377d1900-3c05-5c55-820b-3d4ba19b512c", 2026-04-05 00:47:27.362312 | orchestrator |  "vg_name": "ceph-377d1900-3c05-5c55-820b-3d4ba19b512c" 2026-04-05 00:47:27.362316 | orchestrator |  }, 2026-04-05 00:47:27.362320 | orchestrator |  { 2026-04-05 00:47:27.362324 | orchestrator |  "lv_name": "osd-block-3bb92c70-c222-5380-a7bf-d21f250fcd2a", 2026-04-05 00:47:27.362328 | orchestrator |  "vg_name": "ceph-3bb92c70-c222-5380-a7bf-d21f250fcd2a" 2026-04-05 00:47:27.362332 | orchestrator |  } 2026-04-05 00:47:27.362336 | orchestrator |  ], 2026-04-05 00:47:27.362341 | orchestrator |  "pv": [ 2026-04-05 00:47:27.362345 | orchestrator |  { 2026-04-05 00:47:27.362349 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-05 00:47:27.362353 | orchestrator |  "vg_name": "ceph-3bb92c70-c222-5380-a7bf-d21f250fcd2a" 2026-04-05 00:47:27.362357 | orchestrator |  }, 2026-04-05 00:47:27.362361 | orchestrator |  { 2026-04-05 00:47:27.362365 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-05 00:47:27.362369 | orchestrator |  "vg_name": "ceph-377d1900-3c05-5c55-820b-3d4ba19b512c" 2026-04-05 00:47:27.362373 | orchestrator |  } 2026-04-05 00:47:27.362377 | orchestrator |  ] 2026-04-05 00:47:27.362381 | orchestrator |  } 2026-04-05 00:47:27.362386 | orchestrator | } 2026-04-05 00:47:27.362390 | orchestrator | 2026-04-05 00:47:27.362394 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:47:27.362398 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-05 00:47:27.362402 | 
orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-05 00:47:27.362406 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-05 00:47:27.362411 | orchestrator | 2026-04-05 00:47:27.362415 | orchestrator | 2026-04-05 00:47:27.362419 | orchestrator | 2026-04-05 00:47:27.362427 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:47:27.362431 | orchestrator | Sunday 05 April 2026 00:47:27 +0000 (0:00:00.137) 0:01:12.688 ********** 2026-04-05 00:47:27.362435 | orchestrator | =============================================================================== 2026-04-05 00:47:27.362439 | orchestrator | Create block VGs -------------------------------------------------------- 5.95s 2026-04-05 00:47:27.362443 | orchestrator | Create block LVs -------------------------------------------------------- 4.17s 2026-04-05 00:47:27.362447 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.84s 2026-04-05 00:47:27.362451 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.57s 2026-04-05 00:47:27.362455 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.55s 2026-04-05 00:47:27.362459 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.54s 2026-04-05 00:47:27.362464 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.51s 2026-04-05 00:47:27.362468 | orchestrator | Add known partitions to the list of available block devices ------------- 1.49s 2026-04-05 00:47:27.362475 | orchestrator | Add known links to the list of available block devices ------------------ 1.28s 2026-04-05 00:47:27.649544 | orchestrator | Add known partitions to the list of available block devices ------------- 1.16s 2026-04-05 
00:47:27.649615 | orchestrator | Print LVM report data --------------------------------------------------- 0.86s 2026-04-05 00:47:27.649628 | orchestrator | Create dict of block VGs -> PVs from ceph_osd_devices ------------------- 0.86s 2026-04-05 00:47:27.649637 | orchestrator | Add known links to the list of available block devices ------------------ 0.84s 2026-04-05 00:47:27.649646 | orchestrator | Add known partitions to the list of available block devices ------------- 0.76s 2026-04-05 00:47:27.649675 | orchestrator | Get initial list of available block devices ----------------------------- 0.75s 2026-04-05 00:47:27.649684 | orchestrator | Add known links to the list of available block devices ------------------ 0.74s 2026-04-05 00:47:27.649704 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.74s 2026-04-05 00:47:27.649713 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s 2026-04-05 00:47:27.649721 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.72s 2026-04-05 00:47:27.649730 | orchestrator | Count OSDs put on ceph_db_devices defined in lvm_volumes ---------------- 0.68s 2026-04-05 00:47:39.075128 | orchestrator | 2026-04-05 00:47:39 | INFO  | Prepare task for execution of facts. 2026-04-05 00:47:39.149021 | orchestrator | 2026-04-05 00:47:39 | INFO  | Task b6f31415-1c9e-44cd-83a3-84ca27f5f904 (facts) was prepared for execution. 2026-04-05 00:47:39.149122 | orchestrator | 2026-04-05 00:47:39 | INFO  | It takes a moment until task b6f31415-1c9e-44cd-83a3-84ca27f5f904 (facts) has been started and output is visible here. 
2026-04-05 00:47:51.187136 | orchestrator | 2026-04-05 00:47:51.187226 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-05 00:47:51.187241 | orchestrator | 2026-04-05 00:47:51.187253 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-05 00:47:51.187265 | orchestrator | Sunday 05 April 2026 00:47:42 +0000 (0:00:00.374) 0:00:00.374 ********** 2026-04-05 00:47:51.187276 | orchestrator | ok: [testbed-manager] 2026-04-05 00:47:51.187289 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:47:51.187296 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:47:51.187302 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:47:51.187309 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:47:51.187315 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:47:51.187321 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:47:51.187327 | orchestrator | 2026-04-05 00:47:51.187334 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-05 00:47:51.187340 | orchestrator | Sunday 05 April 2026 00:47:44 +0000 (0:00:01.466) 0:00:01.840 ********** 2026-04-05 00:47:51.187346 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:47:51.187354 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:47:51.187360 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:47:51.187366 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:47:51.187372 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:47:51.187378 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:47:51.187384 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:51.187390 | orchestrator | 2026-04-05 00:47:51.187397 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-05 00:47:51.187403 | orchestrator | 2026-04-05 00:47:51.187409 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-04-05 00:47:51.187415 | orchestrator | Sunday 05 April 2026 00:47:45 +0000 (0:00:01.234) 0:00:03.074 ********** 2026-04-05 00:47:51.187421 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:47:51.187427 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:47:51.187433 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:47:51.187439 | orchestrator | ok: [testbed-manager] 2026-04-05 00:47:51.187445 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:47:51.187451 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:47:51.187457 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:47:51.187463 | orchestrator | 2026-04-05 00:47:51.187470 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-05 00:47:51.187476 | orchestrator | 2026-04-05 00:47:51.187482 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-05 00:47:51.187488 | orchestrator | Sunday 05 April 2026 00:47:50 +0000 (0:00:04.862) 0:00:07.937 ********** 2026-04-05 00:47:51.187494 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:47:51.187501 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:47:51.187571 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:47:51.187577 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:47:51.187592 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:47:51.187598 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:47:51.187612 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:51.187618 | orchestrator | 2026-04-05 00:47:51.187624 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:47:51.187631 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:47:51.187639 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-05 00:47:51.187645 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:47:51.187651 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:47:51.187657 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:47:51.187664 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:47:51.187670 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:47:51.187676 | orchestrator | 2026-04-05 00:47:51.187682 | orchestrator | 2026-04-05 00:47:51.187692 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:47:51.187703 | orchestrator | Sunday 05 April 2026 00:47:50 +0000 (0:00:00.542) 0:00:08.480 ********** 2026-04-05 00:47:51.187713 | orchestrator | =============================================================================== 2026-04-05 00:47:51.187724 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.86s 2026-04-05 00:47:51.187734 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.47s 2026-04-05 00:47:51.187757 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.23s 2026-04-05 00:47:51.187769 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s 2026-04-05 00:48:02.782951 | orchestrator | 2026-04-05 00:48:02 | INFO  | Prepare task for execution of frr. 2026-04-05 00:48:02.874884 | orchestrator | 2026-04-05 00:48:02 | INFO  | Task 1579cdaa-f73f-4eda-bcc3-b79eb949b5dc (frr) was prepared for execution. 
2026-04-05 00:48:02.874975 | orchestrator | 2026-04-05 00:48:02 | INFO  | It takes a moment until task 1579cdaa-f73f-4eda-bcc3-b79eb949b5dc (frr) has been started and output is visible here. 2026-04-05 00:48:29.500069 | orchestrator | 2026-04-05 00:48:29.500215 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-04-05 00:48:29.500243 | orchestrator | 2026-04-05 00:48:29.500263 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-04-05 00:48:29.500284 | orchestrator | Sunday 05 April 2026 00:48:06 +0000 (0:00:00.318) 0:00:00.318 ********** 2026-04-05 00:48:29.500305 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-04-05 00:48:29.500327 | orchestrator | 2026-04-05 00:48:29.500348 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-04-05 00:48:29.500368 | orchestrator | Sunday 05 April 2026 00:48:06 +0000 (0:00:00.309) 0:00:00.628 ********** 2026-04-05 00:48:29.500388 | orchestrator | changed: [testbed-manager] 2026-04-05 00:48:29.500410 | orchestrator | 2026-04-05 00:48:29.500425 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-04-05 00:48:29.500478 | orchestrator | Sunday 05 April 2026 00:48:08 +0000 (0:00:01.633) 0:00:02.262 ********** 2026-04-05 00:48:29.500497 | orchestrator | changed: [testbed-manager] 2026-04-05 00:48:29.500613 | orchestrator | 2026-04-05 00:48:29.500635 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-04-05 00:48:29.500652 | orchestrator | Sunday 05 April 2026 00:48:18 +0000 (0:00:10.321) 0:00:12.583 ********** 2026-04-05 00:48:29.500670 | orchestrator | ok: [testbed-manager] 2026-04-05 00:48:29.500688 | orchestrator | 2026-04-05 00:48:29.500707 | orchestrator | TASK 
[osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-04-05 00:48:29.500725 | orchestrator | Sunday 05 April 2026 00:48:19 +0000 (0:00:01.050) 0:00:13.634 ********** 2026-04-05 00:48:29.500743 | orchestrator | changed: [testbed-manager] 2026-04-05 00:48:29.500760 | orchestrator | 2026-04-05 00:48:29.500777 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-04-05 00:48:29.500795 | orchestrator | Sunday 05 April 2026 00:48:20 +0000 (0:00:00.995) 0:00:14.629 ********** 2026-04-05 00:48:29.500813 | orchestrator | ok: [testbed-manager] 2026-04-05 00:48:29.500830 | orchestrator | 2026-04-05 00:48:29.500849 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ******** 2026-04-05 00:48:29.500868 | orchestrator | Sunday 05 April 2026 00:48:22 +0000 (0:00:01.262) 0:00:15.891 ********** 2026-04-05 00:48:29.500886 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:48:29.500904 | orchestrator | 2026-04-05 00:48:29.500922 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] *** 2026-04-05 00:48:29.500940 | orchestrator | Sunday 05 April 2026 00:48:22 +0000 (0:00:00.159) 0:00:16.051 ********** 2026-04-05 00:48:29.500958 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:48:29.500977 | orchestrator | 2026-04-05 00:48:29.500996 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] ********** 2026-04-05 00:48:29.501014 | orchestrator | Sunday 05 April 2026 00:48:22 +0000 (0:00:00.307) 0:00:16.358 ********** 2026-04-05 00:48:29.501032 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:48:29.501043 | orchestrator | 2026-04-05 00:48:29.501055 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-04-05 00:48:29.501075 | orchestrator | Sunday 05 April 2026 00:48:22 +0000 (0:00:00.188) 0:00:16.546 ********** 2026-04-05 
00:48:29.501088 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:48:29.501105 | orchestrator | 2026-04-05 00:48:29.501124 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-04-05 00:48:29.501141 | orchestrator | Sunday 05 April 2026 00:48:22 +0000 (0:00:00.143) 0:00:16.689 ********** 2026-04-05 00:48:29.501158 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:48:29.501175 | orchestrator | 2026-04-05 00:48:29.501194 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-04-05 00:48:29.501212 | orchestrator | Sunday 05 April 2026 00:48:23 +0000 (0:00:00.164) 0:00:16.854 ********** 2026-04-05 00:48:29.501230 | orchestrator | changed: [testbed-manager] 2026-04-05 00:48:29.501247 | orchestrator | 2026-04-05 00:48:29.501263 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-04-05 00:48:29.501280 | orchestrator | Sunday 05 April 2026 00:48:24 +0000 (0:00:01.020) 0:00:17.875 ********** 2026-04-05 00:48:29.501299 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-04-05 00:48:29.501317 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-04-05 00:48:29.501338 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-04-05 00:48:29.501356 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-04-05 00:48:29.501374 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-04-05 00:48:29.501394 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-04-05 00:48:29.501436 | orchestrator | 2026-04-05 00:48:29.501448 | orchestrator | TASK 
[osism.services.frr : Manage frr service] ********************************* 2026-04-05 00:48:29.501478 | orchestrator | Sunday 05 April 2026 00:48:26 +0000 (0:00:02.279) 0:00:20.155 ********** 2026-04-05 00:48:29.501489 | orchestrator | ok: [testbed-manager] 2026-04-05 00:48:29.501500 | orchestrator | 2026-04-05 00:48:29.501545 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-04-05 00:48:29.501556 | orchestrator | Sunday 05 April 2026 00:48:27 +0000 (0:00:01.276) 0:00:21.431 ********** 2026-04-05 00:48:29.501567 | orchestrator | changed: [testbed-manager] 2026-04-05 00:48:29.501578 | orchestrator | 2026-04-05 00:48:29.501588 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:48:29.501600 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-05 00:48:29.501611 | orchestrator | 2026-04-05 00:48:29.501622 | orchestrator | 2026-04-05 00:48:29.501658 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:48:29.501670 | orchestrator | Sunday 05 April 2026 00:48:29 +0000 (0:00:01.472) 0:00:22.903 ********** 2026-04-05 00:48:29.501680 | orchestrator | =============================================================================== 2026-04-05 00:48:29.501691 | orchestrator | osism.services.frr : Install frr package ------------------------------- 10.32s 2026-04-05 00:48:29.501702 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.28s 2026-04-05 00:48:29.501713 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.63s 2026-04-05 00:48:29.501723 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.47s 2026-04-05 00:48:29.501734 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.28s 
2026-04-05 00:48:29.501744 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.26s 2026-04-05 00:48:29.501755 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.05s 2026-04-05 00:48:29.501766 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.02s 2026-04-05 00:48:29.501776 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.00s 2026-04-05 00:48:29.501787 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.31s 2026-04-05 00:48:29.501798 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.31s 2026-04-05 00:48:29.501809 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.19s 2026-04-05 00:48:29.501819 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.16s 2026-04-05 00:48:29.501831 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.16s 2026-04-05 00:48:29.501841 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.14s 2026-04-05 00:48:29.751899 | orchestrator | 2026-04-05 00:48:29.755103 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sun Apr 5 00:48:29 UTC 2026 2026-04-05 00:48:29.755169 | orchestrator | 2026-04-05 00:48:30.938114 | orchestrator | 2026-04-05 00:48:30 | INFO  | Collection nutshell is prepared for execution 2026-04-05 00:48:31.074563 | orchestrator | 2026-04-05 00:48:31 | INFO  | A [0] - dotfiles 2026-04-05 00:48:41.135558 | orchestrator | 2026-04-05 00:48:41 | INFO  | A [0] - homer 2026-04-05 00:48:41.135658 | orchestrator | 2026-04-05 00:48:41 | INFO  | A [0] - netdata 2026-04-05 00:48:41.135676 | orchestrator | 2026-04-05 00:48:41 | INFO  | A [0] - openstackclient 2026-04-05 00:48:41.135688 | orchestrator | 2026-04-05 00:48:41 
| INFO  | A [0] - phpmyadmin 2026-04-05 00:48:41.135700 | orchestrator | 2026-04-05 00:48:41 | INFO  | A [0] - common 2026-04-05 00:48:41.138757 | orchestrator | 2026-04-05 00:48:41 | INFO  | A [1] -- loadbalancer 2026-04-05 00:48:41.138832 | orchestrator | 2026-04-05 00:48:41 | INFO  | A [2] --- opensearch 2026-04-05 00:48:41.138872 | orchestrator | 2026-04-05 00:48:41 | INFO  | A [2] --- mariadb-ng 2026-04-05 00:48:41.138881 | orchestrator | 2026-04-05 00:48:41 | INFO  | A [3] ---- horizon 2026-04-05 00:48:41.138889 | orchestrator | 2026-04-05 00:48:41 | INFO  | A [3] ---- keystone 2026-04-05 00:48:41.138896 | orchestrator | 2026-04-05 00:48:41 | INFO  | A [4] ----- neutron 2026-04-05 00:48:41.138903 | orchestrator | 2026-04-05 00:48:41 | INFO  | A [5] ------ wait-for-nova 2026-04-05 00:48:41.138911 | orchestrator | 2026-04-05 00:48:41 | INFO  | A [6] ------- octavia 2026-04-05 00:48:41.140052 | orchestrator | 2026-04-05 00:48:41 | INFO  | A [4] ----- barbican 2026-04-05 00:48:41.140135 | orchestrator | 2026-04-05 00:48:41 | INFO  | A [4] ----- designate 2026-04-05 00:48:41.140152 | orchestrator | 2026-04-05 00:48:41 | INFO  | A [4] ----- ironic 2026-04-05 00:48:41.140160 | orchestrator | 2026-04-05 00:48:41 | INFO  | A [4] ----- placement 2026-04-05 00:48:41.140167 | orchestrator | 2026-04-05 00:48:41 | INFO  | A [4] ----- magnum 2026-04-05 00:48:41.141664 | orchestrator | 2026-04-05 00:48:41 | INFO  | A [1] -- openvswitch 2026-04-05 00:48:41.141710 | orchestrator | 2026-04-05 00:48:41 | INFO  | A [2] --- ovn 2026-04-05 00:48:41.142177 | orchestrator | 2026-04-05 00:48:41 | INFO  | A [1] -- memcached 2026-04-05 00:48:41.142205 | orchestrator | 2026-04-05 00:48:41 | INFO  | A [1] -- redis 2026-04-05 00:48:41.142212 | orchestrator | 2026-04-05 00:48:41 | INFO  | A [1] -- rabbitmq-ng 2026-04-05 00:48:41.142490 | orchestrator | 2026-04-05 00:48:41 | INFO  | A [0] - kubernetes 2026-04-05 00:48:41.145462 | orchestrator | 2026-04-05 00:48:41 | INFO  | A [1] -- 
kubeconfig 2026-04-05 00:48:41.145690 | orchestrator | 2026-04-05 00:48:41 | INFO  | A [1] -- copy-kubeconfig 2026-04-05 00:48:41.145802 | orchestrator | 2026-04-05 00:48:41 | INFO  | A [0] - ceph 2026-04-05 00:48:41.148912 | orchestrator | 2026-04-05 00:48:41 | INFO  | A [1] -- ceph-pools 2026-04-05 00:48:41.148955 | orchestrator | 2026-04-05 00:48:41 | INFO  | A [2] --- copy-ceph-keys 2026-04-05 00:48:41.148964 | orchestrator | 2026-04-05 00:48:41 | INFO  | A [3] ---- cephclient 2026-04-05 00:48:41.149030 | orchestrator | 2026-04-05 00:48:41 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-04-05 00:48:41.149047 | orchestrator | 2026-04-05 00:48:41 | INFO  | A [4] ----- wait-for-keystone 2026-04-05 00:48:41.149055 | orchestrator | 2026-04-05 00:48:41 | INFO  | A [5] ------ kolla-ceph-rgw 2026-04-05 00:48:41.149553 | orchestrator | 2026-04-05 00:48:41 | INFO  | A [5] ------ glance 2026-04-05 00:48:41.149585 | orchestrator | 2026-04-05 00:48:41 | INFO  | A [5] ------ cinder 2026-04-05 00:48:41.149593 | orchestrator | 2026-04-05 00:48:41 | INFO  | A [5] ------ nova 2026-04-05 00:48:41.149699 | orchestrator | 2026-04-05 00:48:41 | INFO  | A [4] ----- prometheus 2026-04-05 00:48:41.149709 | orchestrator | 2026-04-05 00:48:41 | INFO  | A [5] ------ grafana 2026-04-05 00:48:41.421907 | orchestrator | 2026-04-05 00:48:41 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-04-05 00:48:41.421989 | orchestrator | 2026-04-05 00:48:41 | INFO  | Tasks are running in the background 2026-04-05 00:48:43.977154 | orchestrator | 2026-04-05 00:48:43 | INFO  | No task IDs specified, wait for all currently running tasks 2026-04-05 00:48:46.237639 | orchestrator | 2026-04-05 00:48:46 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED 2026-04-05 00:48:46.239478 | orchestrator | 2026-04-05 00:48:46 | INFO  | Task d0ad8172-f585-4c23-bd7a-d5eef4c5bab3 is in state STARTED 2026-04-05 00:48:46.244699 | orchestrator | 2026-04-05 00:48:46 | INFO 
 | Task 50ba8e15-6b42-4331-a9e5-4dcf34d14bb0 is in state STARTED 2026-04-05 00:48:46.245412 | orchestrator | 2026-04-05 00:48:46 | INFO  | Task 4fd79e15-2db5-43fb-bcb7-72aac79cd865 is in state STARTED 2026-04-05 00:48:46.245929 | orchestrator | 2026-04-05 00:48:46 | INFO  | Task 338fbf0c-7fdd-48d9-b1c5-9750a753dbd5 is in state STARTED 2026-04-05 00:48:46.247308 | orchestrator | 2026-04-05 00:48:46 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:48:46.253371 | orchestrator | 2026-04-05 00:48:46 | INFO  | Task 0131e6bf-503d-429c-aca1-166a2bc1f384 is in state STARTED 2026-04-05 00:48:46.253398 | orchestrator | 2026-04-05 00:48:46 | INFO  | Wait 1 second(s) until the next check
00:49:01 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:49:04.904405 | orchestrator | 2026-04-05 00:49:04 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED 2026-04-05 00:49:04.907860 | orchestrator | 2026-04-05 00:49:04 | INFO  | Task d0ad8172-f585-4c23-bd7a-d5eef4c5bab3 is in state STARTED 2026-04-05 00:49:04.909432 | orchestrator | 2026-04-05 00:49:04 | INFO  | Task 50ba8e15-6b42-4331-a9e5-4dcf34d14bb0 is in state STARTED 2026-04-05 00:49:04.911431 | orchestrator | 2026-04-05 00:49:04 | INFO  | Task 4fd79e15-2db5-43fb-bcb7-72aac79cd865 is in state STARTED 2026-04-05 00:49:04.913434 | orchestrator | 2026-04-05 00:49:04 | INFO  | Task 338fbf0c-7fdd-48d9-b1c5-9750a753dbd5 is in state STARTED 2026-04-05 00:49:04.914578 | orchestrator | 2026-04-05 00:49:04 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:49:04.915626 | orchestrator | 2026-04-05 00:49:04 | INFO  | Task 0131e6bf-503d-429c-aca1-166a2bc1f384 is in state STARTED 2026-04-05 00:49:04.915681 | orchestrator | 2026-04-05 00:49:04 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:49:08.108285 | orchestrator | 2026-04-05 00:49:07 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED 2026-04-05 00:49:08.108342 | orchestrator | 2026-04-05 00:49:07 | INFO  | Task d0ad8172-f585-4c23-bd7a-d5eef4c5bab3 is in state STARTED 2026-04-05 00:49:08.108360 | orchestrator | 2026-04-05 00:49:07 | INFO  | Task 50ba8e15-6b42-4331-a9e5-4dcf34d14bb0 is in state STARTED 2026-04-05 00:49:08.108365 | orchestrator | 2026-04-05 00:49:07 | INFO  | Task 4fd79e15-2db5-43fb-bcb7-72aac79cd865 is in state STARTED 2026-04-05 00:49:08.108369 | orchestrator | 2026-04-05 00:49:07 | INFO  | Task 338fbf0c-7fdd-48d9-b1c5-9750a753dbd5 is in state STARTED 2026-04-05 00:49:08.108373 | orchestrator | 2026-04-05 00:49:08 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:49:08.108377 | orchestrator | 2026-04-05 
00:49:08 | INFO  | Task 0131e6bf-503d-429c-aca1-166a2bc1f384 is in state STARTED 2026-04-05 00:49:08.108381 | orchestrator | 2026-04-05 00:49:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:49:11.137445 | orchestrator | 2026-04-05 00:49:11 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED 2026-04-05 00:49:11.137595 | orchestrator | 2026-04-05 00:49:11 | INFO  | Task d0ad8172-f585-4c23-bd7a-d5eef4c5bab3 is in state STARTED 2026-04-05 00:49:11.139436 | orchestrator | 2026-04-05 00:49:11 | INFO  | Task 50ba8e15-6b42-4331-a9e5-4dcf34d14bb0 is in state STARTED 2026-04-05 00:49:11.167055 | orchestrator | 2026-04-05 00:49:11 | INFO  | Task 4fd79e15-2db5-43fb-bcb7-72aac79cd865 is in state STARTED 2026-04-05 00:49:11.167147 | orchestrator | 2026-04-05 00:49:11 | INFO  | Task 338fbf0c-7fdd-48d9-b1c5-9750a753dbd5 is in state STARTED 2026-04-05 00:49:11.167163 | orchestrator | 2026-04-05 00:49:11 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:49:11.167174 | orchestrator | 2026-04-05 00:49:11 | INFO  | Task 0131e6bf-503d-429c-aca1-166a2bc1f384 is in state STARTED 2026-04-05 00:49:11.167186 | orchestrator | 2026-04-05 00:49:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:49:14.587038 | orchestrator | 2026-04-05 00:49:14 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED 2026-04-05 00:49:14.603777 | orchestrator | 2026-04-05 00:49:14 | INFO  | Task d0ad8172-f585-4c23-bd7a-d5eef4c5bab3 is in state STARTED 2026-04-05 00:49:14.609987 | orchestrator | 2026-04-05 00:49:14.610218 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2026-04-05 00:49:14.610232 | orchestrator | 2026-04-05 00:49:14.610240 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
****
2026-04-05 00:49:14.610247 | orchestrator | Sunday 05 April 2026 00:48:54 +0000 (0:00:00.783) 0:00:00.783 **********
2026-04-05 00:49:14.610254 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:49:14.610261 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:49:14.610267 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:49:14.610274 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:49:14.610281 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:49:14.610287 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:49:14.610293 | orchestrator | changed: [testbed-manager]
2026-04-05 00:49:14.610306 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2026-04-05 00:49:14.610312 | orchestrator | Sunday 05 April 2026 00:49:01 +0000 (0:00:03.326) 0:00:08.112 **********
2026-04-05 00:49:14.610319 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-04-05 00:49:14.610326 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-04-05 00:49:14.610333 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-04-05 00:49:14.610339 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-04-05 00:49:14.610345 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-04-05 00:49:14.610351 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-04-05 00:49:14.610357 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-04-05 00:49:14.610370 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.]
***
2026-04-05 00:49:14.610402 | orchestrator | Sunday 05 April 2026 00:49:05 +0000 (0:00:02.391) 0:00:11.439 **********
2026-04-05 00:49:14.610414 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'failed': False, 'msg': 'non-zero return code', 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
[equivalent ok results for testbed-node-1, testbed-node-2, testbed-manager, testbed-node-4, testbed-node-3 and testbed-node-5 elided: on every host `ls -F ~/.tmux.conf` returned rc=2 ("No such file or directory"), so there was no existing file to remove]
2026-04-05 00:49:14.610583 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2026-04-05 00:49:14.610604 | orchestrator | Sunday 05 April 2026 00:49:07 +0000 (0:00:01.736) 0:00:13.830 **********
2026-04-05 00:49:14.610620 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-04-05 00:49:14.610637 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-04-05 00:49:14.610651 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-04-05 00:49:14.610681 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-04-05 00:49:14.610691 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-04-05 00:49:14.610700 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-04-05 00:49:14.610711 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-04-05 00:49:14.610732 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.]
******************
2026-04-05 00:49:14.610742 | orchestrator | Sunday 05 April 2026 00:49:09 +0000 (0:00:03.900) 0:00:15.567 **********
2026-04-05 00:49:14.610754 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2026-04-05 00:49:14.610765 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2026-04-05 00:49:14.610776 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2026-04-05 00:49:14.610788 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2026-04-05 00:49:14.610801 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2026-04-05 00:49:14.610849 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2026-04-05 00:49:14.610857 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2026-04-05 00:49:14.610900 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:49:14.610922 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:49:14.610934 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:49:14.610955 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:49:14.610965 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:49:14.611010 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:49:14.611024 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:49:14.611030 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:49:14.611049 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:49:14.611056 | orchestrator | Sunday 05 April 2026 00:49:13 +0000 (0:00:03.900) 0:00:19.467 **********
2026-04-05 00:49:14.611172 | orchestrator | ===============================================================================
2026-04-05 00:49:14.611179 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 7.33s
2026-04-05 00:49:14.611186 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.90s
2026-04-05 00:49:14.611192 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 3.33s
2026-04-05 00:49:14.611198 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.39s
2026-04-05 00:49:14.611205 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.74s
2026-04-05 00:49:14.611212 | orchestrator | 2026-04-05 00:49:14 | INFO  | Task 50ba8e15-6b42-4331-a9e5-4dcf34d14bb0 is in state SUCCESS
2026-04-05 00:49:14.637421 | orchestrator | 2026-04-05 00:49:14 | INFO  | Task 4fd79e15-2db5-43fb-bcb7-72aac79cd865 is in state STARTED
2026-04-05 00:49:14.642816 | orchestrator | 2026-04-05 00:49:14 | INFO  | Task 338fbf0c-7fdd-48d9-b1c5-9750a753dbd5 is in state STARTED
2026-04-05 00:49:14.647147 | orchestrator | 2026-04-05 00:49:14 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED
2026-04-05 00:49:14.653620 | orchestrator | 2026-04-05 00:49:14 | INFO  | Task 0131e6bf-503d-429c-aca1-166a2bc1f384 is in state STARTED
2026-04-05 00:49:14.653702 | orchestrator | 2026-04-05 00:49:14 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:49:18.282671 | orchestrator | 2026-04-05 00:49:17 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED
2026-04-05 00:49:18.282728 | orchestrator | 2026-04-05 00:49:17 | INFO  | Task d0ad8172-f585-4c23-bd7a-d5eef4c5bab3 is
in state STARTED
2026-04-05 00:49:18.282757 | orchestrator | 2026-04-05 00:49:17 | INFO  | Task 16735f5b-8b93-4ae9-8c21-cfad5feabfca is in state STARTED
[identical polling cycles from 00:49:17 to 00:49:43 elided: tasks dd1510c4-c2d0-41b1-a669-d61df898e243, d0ad8172-f585-4c23-bd7a-d5eef4c5bab3, 4fd79e15-2db5-43fb-bcb7-72aac79cd865, 338fbf0c-7fdd-48d9-b1c5-9750a753dbd5, 213b66ba-da42-4dc4-ac12-27b3a63d9ff2, 16735f5b-8b93-4ae9-8c21-cfad5feabfca and 0131e6bf-503d-429c-aca1-166a2bc1f384 remained in state STARTED]
2026-04-05 00:49:46.278277 | orchestrator | 2026-04-05 00:49:46 | INFO  | Task d0ad8172-f585-4c23-bd7a-d5eef4c5bab3 is in state SUCCESS
[identical polling cycles from 00:49:46 to 00:49:55 elided: the six remaining tasks stayed in state STARTED]
2026-04-05 00:49:58.657390 | orchestrator | 2026-04-05 00:49:58 | INFO  | Task 338fbf0c-7fdd-48d9-b1c5-9750a753dbd5 is in state SUCCESS
[identical polling cycles from 00:50:01 to 00:50:04 elided: tasks dd1510c4-c2d0-41b1-a669-d61df898e243, 4fd79e15-2db5-43fb-bcb7-72aac79cd865, 213b66ba-da42-4dc4-ac12-27b3a63d9ff2, 16735f5b-8b93-4ae9-8c21-cfad5feabfca and 0131e6bf-503d-429c-aca1-166a2bc1f384 remained in state STARTED]
2026-04-05 00:50:07.997186 | orchestrator | 2026-04-05 00:50:07 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED
2026-04-05 00:50:07.999111 | orchestrator | 2026-04-05 00:50:07 | INFO  | Task
4fd79e15-2db5-43fb-bcb7-72aac79cd865 is in state STARTED 2026-04-05 00:50:07.999209 | orchestrator | 2026-04-05 00:50:07 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:50:08.001567 | orchestrator | 2026-04-05 00:50:07 | INFO  | Task 16735f5b-8b93-4ae9-8c21-cfad5feabfca is in state STARTED 2026-04-05 00:50:08.001623 | orchestrator | 2026-04-05 00:50:07 | INFO  | Task 0131e6bf-503d-429c-aca1-166a2bc1f384 is in state STARTED 2026-04-05 00:50:08.001629 | orchestrator | 2026-04-05 00:50:07 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:50:11.043938 | orchestrator | 2026-04-05 00:50:11 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED 2026-04-05 00:50:11.046147 | orchestrator | 2026-04-05 00:50:11 | INFO  | Task 4fd79e15-2db5-43fb-bcb7-72aac79cd865 is in state STARTED 2026-04-05 00:50:11.047838 | orchestrator | 2026-04-05 00:50:11 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:50:11.049675 | orchestrator | 2026-04-05 00:50:11 | INFO  | Task 16735f5b-8b93-4ae9-8c21-cfad5feabfca is in state STARTED 2026-04-05 00:50:11.051162 | orchestrator | 2026-04-05 00:50:11 | INFO  | Task 0131e6bf-503d-429c-aca1-166a2bc1f384 is in state STARTED 2026-04-05 00:50:11.051198 | orchestrator | 2026-04-05 00:50:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:50:14.096756 | orchestrator | 2026-04-05 00:50:14 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED 2026-04-05 00:50:14.100991 | orchestrator | 2026-04-05 00:50:14 | INFO  | Task 4fd79e15-2db5-43fb-bcb7-72aac79cd865 is in state STARTED 2026-04-05 00:50:14.104749 | orchestrator | 2026-04-05 00:50:14 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:50:14.107173 | orchestrator | 2026-04-05 00:50:14 | INFO  | Task 16735f5b-8b93-4ae9-8c21-cfad5feabfca is in state STARTED 2026-04-05 00:50:14.109915 | orchestrator | 2026-04-05 00:50:14 | INFO  | Task 
0131e6bf-503d-429c-aca1-166a2bc1f384 is in state STARTED 2026-04-05 00:50:14.110004 | orchestrator | 2026-04-05 00:50:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:50:17.156342 | orchestrator | 2026-04-05 00:50:17 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED 2026-04-05 00:50:17.160129 | orchestrator | 2026-04-05 00:50:17 | INFO  | Task 4fd79e15-2db5-43fb-bcb7-72aac79cd865 is in state STARTED 2026-04-05 00:50:17.161966 | orchestrator | 2026-04-05 00:50:17 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:50:17.163049 | orchestrator | 2026-04-05 00:50:17 | INFO  | Task 16735f5b-8b93-4ae9-8c21-cfad5feabfca is in state STARTED 2026-04-05 00:50:17.168716 | orchestrator | 2026-04-05 00:50:17 | INFO  | Task 0131e6bf-503d-429c-aca1-166a2bc1f384 is in state STARTED 2026-04-05 00:50:17.168795 | orchestrator | 2026-04-05 00:50:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:50:20.263603 | orchestrator | 2026-04-05 00:50:20 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED 2026-04-05 00:50:20.270374 | orchestrator | 2026-04-05 00:50:20 | INFO  | Task 4fd79e15-2db5-43fb-bcb7-72aac79cd865 is in state STARTED 2026-04-05 00:50:20.277568 | orchestrator | 2026-04-05 00:50:20 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:50:20.278672 | orchestrator | 2026-04-05 00:50:20 | INFO  | Task 16735f5b-8b93-4ae9-8c21-cfad5feabfca is in state STARTED 2026-04-05 00:50:20.280765 | orchestrator | 2026-04-05 00:50:20 | INFO  | Task 0131e6bf-503d-429c-aca1-166a2bc1f384 is in state STARTED 2026-04-05 00:50:20.280862 | orchestrator | 2026-04-05 00:50:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:50:23.352680 | orchestrator | 2026-04-05 00:50:23 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED 2026-04-05 00:50:23.353919 | orchestrator | 2026-04-05 00:50:23 | INFO  | Task 
4fd79e15-2db5-43fb-bcb7-72aac79cd865 is in state STARTED 2026-04-05 00:50:23.359899 | orchestrator | 2026-04-05 00:50:23 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:50:23.367301 | orchestrator | 2026-04-05 00:50:23 | INFO  | Task 16735f5b-8b93-4ae9-8c21-cfad5feabfca is in state STARTED 2026-04-05 00:50:23.377921 | orchestrator | 2026-04-05 00:50:23 | INFO  | Task 0131e6bf-503d-429c-aca1-166a2bc1f384 is in state STARTED 2026-04-05 00:50:23.378109 | orchestrator | 2026-04-05 00:50:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:50:26.593018 | orchestrator | 2026-04-05 00:50:26 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED 2026-04-05 00:50:26.593973 | orchestrator | 2026-04-05 00:50:26 | INFO  | Task 4fd79e15-2db5-43fb-bcb7-72aac79cd865 is in state STARTED 2026-04-05 00:50:26.595795 | orchestrator | 2026-04-05 00:50:26 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:50:26.606812 | orchestrator | 2026-04-05 00:50:26 | INFO  | Task 16735f5b-8b93-4ae9-8c21-cfad5feabfca is in state STARTED 2026-04-05 00:50:26.611828 | orchestrator | 2026-04-05 00:50:26 | INFO  | Task 0131e6bf-503d-429c-aca1-166a2bc1f384 is in state STARTED 2026-04-05 00:50:26.612730 | orchestrator | 2026-04-05 00:50:26 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:50:29.679390 | orchestrator | 2026-04-05 00:50:29 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED 2026-04-05 00:50:29.684957 | orchestrator | 2026-04-05 00:50:29 | INFO  | Task 4fd79e15-2db5-43fb-bcb7-72aac79cd865 is in state STARTED 2026-04-05 00:50:29.688044 | orchestrator | 2026-04-05 00:50:29 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:50:29.690747 | orchestrator | 2026-04-05 00:50:29 | INFO  | Task 16735f5b-8b93-4ae9-8c21-cfad5feabfca is in state STARTED 2026-04-05 00:50:29.694736 | orchestrator | 2026-04-05 00:50:29 | INFO  | Task 
0131e6bf-503d-429c-aca1-166a2bc1f384 is in state STARTED 2026-04-05 00:50:29.695848 | orchestrator | 2026-04-05 00:50:29 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:50:32.770077 | orchestrator | 2026-04-05 00:50:32 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED 2026-04-05 00:50:32.770191 | orchestrator | 2026-04-05 00:50:32 | INFO  | Task 4fd79e15-2db5-43fb-bcb7-72aac79cd865 is in state STARTED 2026-04-05 00:50:32.770225 | orchestrator | 2026-04-05 00:50:32 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:50:32.770245 | orchestrator | 2026-04-05 00:50:32 | INFO  | Task 16735f5b-8b93-4ae9-8c21-cfad5feabfca is in state STARTED 2026-04-05 00:50:32.792225 | orchestrator | 2026-04-05 00:50:32 | INFO  | Task 0131e6bf-503d-429c-aca1-166a2bc1f384 is in state STARTED 2026-04-05 00:50:32.792306 | orchestrator | 2026-04-05 00:50:32 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:50:35.819864 | orchestrator | 2026-04-05 00:50:35 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED 2026-04-05 00:50:35.820603 | orchestrator | 2026-04-05 00:50:35 | INFO  | Task 4fd79e15-2db5-43fb-bcb7-72aac79cd865 is in state STARTED 2026-04-05 00:50:35.826888 | orchestrator | 2026-04-05 00:50:35 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:50:35.827050 | orchestrator | 2026-04-05 00:50:35 | INFO  | Task 16735f5b-8b93-4ae9-8c21-cfad5feabfca is in state STARTED 2026-04-05 00:50:35.827113 | orchestrator | 2026-04-05 00:50:35 | INFO  | Task 0131e6bf-503d-429c-aca1-166a2bc1f384 is in state STARTED 2026-04-05 00:50:35.827133 | orchestrator | 2026-04-05 00:50:35 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:50:38.864365 | orchestrator | 2026-04-05 00:50:38 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED 2026-04-05 00:50:38.865741 | orchestrator | 2026-04-05 00:50:38 | INFO  | Task 
4fd79e15-2db5-43fb-bcb7-72aac79cd865 is in state STARTED 2026-04-05 00:50:38.866667 | orchestrator | 2026-04-05 00:50:38 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:50:38.867356 | orchestrator | 2026-04-05 00:50:38 | INFO  | Task 16735f5b-8b93-4ae9-8c21-cfad5feabfca is in state SUCCESS 2026-04-05 00:50:38.867923 | orchestrator | 2026-04-05 00:50:38.867970 | orchestrator | 2026-04-05 00:50:38.867980 | orchestrator | PLAY [Apply role homer] ******************************************************** 2026-04-05 00:50:38.867988 | orchestrator | 2026-04-05 00:50:38.867995 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2026-04-05 00:50:38.868015 | orchestrator | Sunday 05 April 2026 00:48:54 +0000 (0:00:00.851) 0:00:00.851 ********** 2026-04-05 00:50:38.868022 | orchestrator | ok: [testbed-manager] => { 2026-04-05 00:50:38.868030 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2026-04-05 00:50:38.868037 | orchestrator | } 2026-04-05 00:50:38.868044 | orchestrator | 2026-04-05 00:50:38.868050 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2026-04-05 00:50:38.868057 | orchestrator | Sunday 05 April 2026 00:48:55 +0000 (0:00:00.644) 0:00:01.496 ********** 2026-04-05 00:50:38.868063 | orchestrator | ok: [testbed-manager] 2026-04-05 00:50:38.868070 | orchestrator | 2026-04-05 00:50:38.868076 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2026-04-05 00:50:38.868083 | orchestrator | Sunday 05 April 2026 00:49:00 +0000 (0:00:05.089) 0:00:06.586 ********** 2026-04-05 00:50:38.868097 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2026-04-05 00:50:38.868103 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2026-04-05 00:50:38.868110 | orchestrator | 2026-04-05 00:50:38.868116 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2026-04-05 00:50:38.868123 | orchestrator | Sunday 05 April 2026 00:49:04 +0000 (0:00:03.758) 0:00:10.344 ********** 2026-04-05 00:50:38.868129 | orchestrator | changed: [testbed-manager] 2026-04-05 00:50:38.868136 | orchestrator | 2026-04-05 00:50:38.868142 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2026-04-05 00:50:38.868148 | orchestrator | Sunday 05 April 2026 00:49:08 +0000 (0:00:04.759) 0:00:15.103 ********** 2026-04-05 00:50:38.868154 | orchestrator | changed: [testbed-manager] 2026-04-05 00:50:38.868266 | orchestrator | 2026-04-05 00:50:38.868277 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2026-04-05 00:50:38.868284 | orchestrator | Sunday 05 April 2026 00:49:10 +0000 (0:00:01.509) 0:00:16.612 ********** 2026-04-05 00:50:38.868290 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
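[Editor's note: the "FAILED - RETRYING: … (10 retries left)" line above is the standard output of Ansible's until/retries/delay task loop while the container engine comes up. A minimal sketch of that pattern, assuming illustrative module and parameter values that are not taken from the actual osism.services.homer role:]

```yaml
# Hypothetical sketch only; the real task definition is not shown in this log.
- name: Manage homer service
  community.docker.docker_compose_v2:
    project_src: /opt/homer     # directory holding the docker-compose.yml copied earlier
    state: present
  register: result
  until: result is success      # re-run the module until it reports success
  retries: 10                   # first failure prints "(10 retries left)", counting down
  delay: 5                      # assumed seconds to wait between attempts
```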
2026-04-05 00:50:38.868297 | orchestrator | ok: [testbed-manager] 2026-04-05 00:50:38.868306 | orchestrator | 2026-04-05 00:50:38.868316 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2026-04-05 00:50:38.868327 | orchestrator | Sunday 05 April 2026 00:49:38 +0000 (0:00:27.719) 0:00:44.332 ********** 2026-04-05 00:50:38.868337 | orchestrator | changed: [testbed-manager] 2026-04-05 00:50:38.868347 | orchestrator | 2026-04-05 00:50:38.868357 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:50:38.868368 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:50:38.868407 | orchestrator | 2026-04-05 00:50:38.868418 | orchestrator | 2026-04-05 00:50:38.868429 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:50:38.868438 | orchestrator | Sunday 05 April 2026 00:49:42 +0000 (0:00:04.698) 0:00:49.030 ********** 2026-04-05 00:50:38.868445 | orchestrator | =============================================================================== 2026-04-05 00:50:38.868451 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 27.72s 2026-04-05 00:50:38.868458 | orchestrator | osism.services.homer : Create traefik external network ------------------ 5.09s 2026-04-05 00:50:38.868464 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 4.76s 2026-04-05 00:50:38.868470 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 4.70s 2026-04-05 00:50:38.868476 | orchestrator | osism.services.homer : Create required directories ---------------------- 3.76s 2026-04-05 00:50:38.868483 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.51s 2026-04-05 00:50:38.868490 | orchestrator | osism.services.homer : Inform 
about new parameter homer_url_opensearch_dashboards --- 0.64s 2026-04-05 00:50:38.868517 | orchestrator | 2026-04-05 00:50:38.868524 | orchestrator | 2026-04-05 00:50:38.868531 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-04-05 00:50:38.868539 | orchestrator | 2026-04-05 00:50:38.868546 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-04-05 00:50:38.868553 | orchestrator | Sunday 05 April 2026 00:48:54 +0000 (0:00:00.912) 0:00:00.912 ********** 2026-04-05 00:50:38.868561 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-04-05 00:50:38.868569 | orchestrator | 2026-04-05 00:50:38.868576 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-04-05 00:50:38.868584 | orchestrator | Sunday 05 April 2026 00:48:55 +0000 (0:00:01.117) 0:00:02.029 ********** 2026-04-05 00:50:38.868591 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-04-05 00:50:38.868598 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2026-04-05 00:50:38.868606 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-04-05 00:50:38.868613 | orchestrator | 2026-04-05 00:50:38.868621 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-04-05 00:50:38.868628 | orchestrator | Sunday 05 April 2026 00:49:00 +0000 (0:00:05.108) 0:00:07.138 ********** 2026-04-05 00:50:38.868635 | orchestrator | changed: [testbed-manager] 2026-04-05 00:50:38.868643 | orchestrator | 2026-04-05 00:50:38.868650 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-04-05 00:50:38.868657 | orchestrator | Sunday 05 April 2026 00:49:04 +0000 (0:00:04.337) 
0:00:11.475 ********** 2026-04-05 00:50:38.868679 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2026-04-05 00:50:38.868690 | orchestrator | ok: [testbed-manager] 2026-04-05 00:50:38.868699 | orchestrator | 2026-04-05 00:50:38.868708 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-04-05 00:50:38.868719 | orchestrator | Sunday 05 April 2026 00:49:44 +0000 (0:00:40.061) 0:00:51.537 ********** 2026-04-05 00:50:38.868732 | orchestrator | changed: [testbed-manager] 2026-04-05 00:50:38.868741 | orchestrator | 2026-04-05 00:50:38.868752 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-04-05 00:50:38.868759 | orchestrator | Sunday 05 April 2026 00:49:46 +0000 (0:00:02.241) 0:00:53.779 ********** 2026-04-05 00:50:38.868766 | orchestrator | ok: [testbed-manager] 2026-04-05 00:50:38.868772 | orchestrator | 2026-04-05 00:50:38.868778 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-04-05 00:50:38.868785 | orchestrator | Sunday 05 April 2026 00:49:50 +0000 (0:00:03.339) 0:00:57.118 ********** 2026-04-05 00:50:38.868791 | orchestrator | changed: [testbed-manager] 2026-04-05 00:50:38.868809 | orchestrator | 2026-04-05 00:50:38.868816 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-04-05 00:50:38.868824 | orchestrator | Sunday 05 April 2026 00:49:53 +0000 (0:00:03.402) 0:01:00.522 ********** 2026-04-05 00:50:38.868835 | orchestrator | changed: [testbed-manager] 2026-04-05 00:50:38.868841 | orchestrator | 2026-04-05 00:50:38.868848 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-04-05 00:50:38.868854 | orchestrator | Sunday 05 April 2026 00:49:56 +0000 (0:00:02.609) 0:01:03.131 ********** 2026-04-05 00:50:38.868860 | orchestrator | changed: 
[testbed-manager] 2026-04-05 00:50:38.868866 | orchestrator | 2026-04-05 00:50:38.868873 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-04-05 00:50:38.868879 | orchestrator | Sunday 05 April 2026 00:49:56 +0000 (0:00:00.706) 0:01:03.837 ********** 2026-04-05 00:50:38.868889 | orchestrator | ok: [testbed-manager] 2026-04-05 00:50:38.868897 | orchestrator | 2026-04-05 00:50:38.868903 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:50:38.868910 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:50:38.868918 | orchestrator | 2026-04-05 00:50:38.868927 | orchestrator | 2026-04-05 00:50:38.868934 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:50:38.868940 | orchestrator | Sunday 05 April 2026 00:49:57 +0000 (0:00:00.590) 0:01:04.427 ********** 2026-04-05 00:50:38.868946 | orchestrator | =============================================================================== 2026-04-05 00:50:38.868952 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 40.06s 2026-04-05 00:50:38.868962 | orchestrator | osism.services.openstackclient : Create required directories ------------ 5.11s 2026-04-05 00:50:38.868970 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 4.34s 2026-04-05 00:50:38.868976 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 3.40s 2026-04-05 00:50:38.868982 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 3.34s 2026-04-05 00:50:38.868988 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 2.61s 2026-04-05 00:50:38.869003 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 2.24s 
2026-04-05 00:50:38.869009 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 1.12s 2026-04-05 00:50:38.869065 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.71s 2026-04-05 00:50:38.869073 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.59s 2026-04-05 00:50:38.869168 | orchestrator | 2026-04-05 00:50:38.869176 | orchestrator | 2026-04-05 00:50:38.869182 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2026-04-05 00:50:38.869188 | orchestrator | 2026-04-05 00:50:38.869195 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2026-04-05 00:50:38.869201 | orchestrator | Sunday 05 April 2026 00:49:19 +0000 (0:00:00.621) 0:00:00.621 ********** 2026-04-05 00:50:38.869207 | orchestrator | ok: [testbed-manager] 2026-04-05 00:50:38.869213 | orchestrator | 2026-04-05 00:50:38.869219 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2026-04-05 00:50:38.869225 | orchestrator | Sunday 05 April 2026 00:49:21 +0000 (0:00:02.031) 0:00:02.652 ********** 2026-04-05 00:50:38.869231 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2026-04-05 00:50:38.869237 | orchestrator | 2026-04-05 00:50:38.869244 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2026-04-05 00:50:38.869252 | orchestrator | Sunday 05 April 2026 00:49:23 +0000 (0:00:01.742) 0:00:04.395 ********** 2026-04-05 00:50:38.869263 | orchestrator | changed: [testbed-manager] 2026-04-05 00:50:38.869271 | orchestrator | 2026-04-05 00:50:38.869277 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2026-04-05 00:50:38.869287 | orchestrator | Sunday 05 April 2026 00:49:26 +0000 (0:00:03.388) 0:00:07.783 ********** 2026-04-05 00:50:38.869304 | 
orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2026-04-05 00:50:38.869310 | orchestrator | ok: [testbed-manager] 2026-04-05 00:50:38.869317 | orchestrator | 2026-04-05 00:50:38.869323 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2026-04-05 00:50:38.869329 | orchestrator | Sunday 05 April 2026 00:50:28 +0000 (0:01:01.755) 0:01:09.538 ********** 2026-04-05 00:50:38.869335 | orchestrator | changed: [testbed-manager] 2026-04-05 00:50:38.869341 | orchestrator | 2026-04-05 00:50:38.869348 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:50:38.869354 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:50:38.869360 | orchestrator | 2026-04-05 00:50:38.869366 | orchestrator | 2026-04-05 00:50:38.869372 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:50:38.869384 | orchestrator | Sunday 05 April 2026 00:50:37 +0000 (0:00:08.625) 0:01:18.164 ********** 2026-04-05 00:50:38.869390 | orchestrator | =============================================================================== 2026-04-05 00:50:38.869397 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 61.76s 2026-04-05 00:50:38.869403 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 8.62s 2026-04-05 00:50:38.869409 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 3.39s 2026-04-05 00:50:38.869415 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 2.03s 2026-04-05 00:50:38.869422 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 1.74s 2026-04-05 00:50:38.869521 | orchestrator | 2026-04-05 00:50:38 | INFO  | Task 
0131e6bf-503d-429c-aca1-166a2bc1f384 is in state STARTED 2026-04-05 00:50:38.869531 | orchestrator | 2026-04-05 00:50:38 | INFO  | Wait 1 second(s) until the next check
[... identical polling cycles repeated roughly every 3 seconds from 00:50:41 to 00:50:48; tasks dd1510c4-c2d0-41b1-a669-d61df898e243, 4fd79e15-2db5-43fb-bcb7-72aac79cd865, 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 and 0131e6bf-503d-429c-aca1-166a2bc1f384 remained in state STARTED ...]
2026-04-05 00:50:51.079311 | orchestrator | 2026-04-05 00:50:51 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED 2026-04-05 00:50:51.080047 | orchestrator | 2026-04-05 00:50:51 | INFO  | Task 4fd79e15-2db5-43fb-bcb7-72aac79cd865 is in state STARTED 2026-04-05 00:50:51.080839 | orchestrator | 2026-04-05 00:50:51 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:50:51.081632 | orchestrator | 2026-04-05 00:50:51 | INFO  | Task 0131e6bf-503d-429c-aca1-166a2bc1f384 is in state SUCCESS 2026-04-05 00:50:51.081924 | orchestrator | 2026-04-05 00:50:51.082546 | orchestrator | 2026-04-05 00:50:51.082565 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 00:50:51.082574 | orchestrator | 2026-04-05 00:50:51.082581 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 00:50:51.082588 | orchestrator | Sunday 05 April 2026 00:48:54 +0000 (0:00:00.707) 0:00:00.707 ********** 2026-04-05 00:50:51.082595 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-04-05 00:50:51.082602 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-04-05 00:50:51.082609 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2026-04-05 00:50:51.082616 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-04-05 00:50:51.082622 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-04-05 00:50:51.082629 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-04-05 00:50:51.082635 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-04-05 00:50:51.082642 | orchestrator | 2026-04-05 00:50:51.082648 | orchestrator 
| PLAY [Apply role netdata] ****************************************************** 2026-04-05 00:50:51.082655 | orchestrator | 2026-04-05 00:50:51.082662 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2026-04-05 00:50:51.082668 | orchestrator | Sunday 05 April 2026 00:48:57 +0000 (0:00:03.234) 0:00:03.942 ********** 2026-04-05 00:50:51.082684 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:50:51.082692 | orchestrator | 2026-04-05 00:50:51.082698 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-04-05 00:50:51.082705 | orchestrator | Sunday 05 April 2026 00:49:01 +0000 (0:00:03.423) 0:00:07.365 ********** 2026-04-05 00:50:51.082715 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:50:51.082722 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:50:51.082729 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:50:51.082735 | orchestrator | ok: [testbed-manager] 2026-04-05 00:50:51.082742 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:50:51.082748 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:50:51.082755 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:50:51.082761 | orchestrator | 2026-04-05 00:50:51.082768 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-04-05 00:50:51.082774 | orchestrator | Sunday 05 April 2026 00:49:04 +0000 (0:00:03.655) 0:00:11.021 ********** 2026-04-05 00:50:51.082781 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:50:51.082787 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:50:51.082794 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:50:51.082800 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:50:51.082807 | orchestrator | ok: 
[testbed-node-4] 2026-04-05 00:50:51.082813 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:50:51.082820 | orchestrator | ok: [testbed-manager] 2026-04-05 00:50:51.082826 | orchestrator | 2026-04-05 00:50:51.082833 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2026-04-05 00:50:51.082839 | orchestrator | Sunday 05 April 2026 00:49:09 +0000 (0:00:04.926) 0:00:15.947 ********** 2026-04-05 00:50:51.082858 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:50:51.082865 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:50:51.082871 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:50:51.082878 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:50:51.082884 | orchestrator | changed: [testbed-manager] 2026-04-05 00:50:51.082890 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:50:51.082897 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:50:51.082904 | orchestrator | 2026-04-05 00:50:51.082910 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2026-04-05 00:50:51.082917 | orchestrator | Sunday 05 April 2026 00:49:11 +0000 (0:00:02.214) 0:00:18.161 ********** 2026-04-05 00:50:51.082923 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:50:51.082930 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:50:51.082936 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:50:51.082943 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:50:51.082949 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:50:51.082955 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:50:51.082961 | orchestrator | changed: [testbed-manager] 2026-04-05 00:50:51.082967 | orchestrator | 2026-04-05 00:50:51.082973 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2026-04-05 00:50:51.082979 | orchestrator | Sunday 05 April 2026 00:49:29 +0000 (0:00:17.989) 0:00:36.151 ********** 2026-04-05 
00:50:51.082985 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:50:51.082991 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:50:51.082996 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:50:51.083002 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:50:51.083009 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:50:51.083016 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:50:51.083021 | orchestrator | changed: [testbed-manager] 2026-04-05 00:50:51.083028 | orchestrator | 2026-04-05 00:50:51.083034 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2026-04-05 00:50:51.083040 | orchestrator | Sunday 05 April 2026 00:50:15 +0000 (0:00:46.051) 0:01:22.203 ********** 2026-04-05 00:50:51.083047 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:50:51.083053 | orchestrator | 2026-04-05 00:50:51.083059 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2026-04-05 00:50:51.083065 | orchestrator | Sunday 05 April 2026 00:50:18 +0000 (0:00:02.291) 0:01:24.495 ********** 2026-04-05 00:50:51.083071 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2026-04-05 00:50:51.083077 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2026-04-05 00:50:51.083083 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2026-04-05 00:50:51.083090 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2026-04-05 00:50:51.083104 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2026-04-05 00:50:51.083111 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2026-04-05 00:50:51.083117 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2026-04-05 00:50:51.083123 | 
orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2026-04-05 00:50:51.083129 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2026-04-05 00:50:51.083136 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2026-04-05 00:50:51.083143 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2026-04-05 00:50:51.083150 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2026-04-05 00:50:51.083156 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2026-04-05 00:50:51.083164 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2026-04-05 00:50:51.083172 | orchestrator | 2026-04-05 00:50:51.083180 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2026-04-05 00:50:51.083188 | orchestrator | Sunday 05 April 2026 00:50:25 +0000 (0:00:07.538) 0:01:32.034 ********** 2026-04-05 00:50:51.083203 | orchestrator | ok: [testbed-manager] 2026-04-05 00:50:51.083210 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:50:51.083217 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:50:51.083227 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:50:51.083234 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:50:51.083241 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:50:51.083247 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:50:51.083254 | orchestrator | 2026-04-05 00:50:51.083262 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2026-04-05 00:50:51.083270 | orchestrator | Sunday 05 April 2026 00:50:27 +0000 (0:00:01.893) 0:01:33.928 ********** 2026-04-05 00:50:51.083277 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:50:51.083288 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:50:51.083296 | orchestrator | changed: [testbed-manager] 2026-04-05 00:50:51.083303 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:50:51.083310 | orchestrator | changed: 
[testbed-node-3] 2026-04-05 00:50:51.083317 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:50:51.083329 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:50:51.083336 | orchestrator | 2026-04-05 00:50:51.083343 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2026-04-05 00:50:51.083351 | orchestrator | Sunday 05 April 2026 00:50:29 +0000 (0:00:02.197) 0:01:36.125 ********** 2026-04-05 00:50:51.083359 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:50:51.083366 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:50:51.083373 | orchestrator | ok: [testbed-manager] 2026-04-05 00:50:51.083380 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:50:51.083387 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:50:51.083395 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:50:51.083402 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:50:51.083409 | orchestrator | 2026-04-05 00:50:51.083416 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-04-05 00:50:51.083423 | orchestrator | Sunday 05 April 2026 00:50:31 +0000 (0:00:01.841) 0:01:37.966 ********** 2026-04-05 00:50:51.083429 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:50:51.083435 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:50:51.083441 | orchestrator | ok: [testbed-manager] 2026-04-05 00:50:51.083447 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:50:51.083454 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:50:51.083462 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:50:51.083468 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:50:51.083475 | orchestrator | 2026-04-05 00:50:51.083482 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-04-05 00:50:51.083489 | orchestrator | Sunday 05 April 2026 00:50:33 +0000 (0:00:02.223) 0:01:40.190 ********** 2026-04-05 00:50:51.083512 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-04-05 00:50:51.083520 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:50:51.083528 | orchestrator |
2026-04-05 00:50:51.083534 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-04-05 00:50:51.083541 | orchestrator | Sunday 05 April 2026 00:50:35 +0000 (0:00:01.662) 0:01:41.853 **********
2026-04-05 00:50:51.083547 | orchestrator | changed: [testbed-manager]
2026-04-05 00:50:51.083553 | orchestrator |
2026-04-05 00:50:51.083560 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-04-05 00:50:51.083566 | orchestrator | Sunday 05 April 2026 00:50:38 +0000 (0:00:02.642) 0:01:44.495 **********
2026-04-05 00:50:51.083572 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:50:51.083578 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:50:51.083585 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:50:51.083591 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:50:51.083603 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:50:51.083610 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:50:51.083616 | orchestrator | changed: [testbed-manager]
2026-04-05 00:50:51.083622 | orchestrator |
2026-04-05 00:50:51.083629 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:50:51.083635 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:50:51.083642 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:50:51.083649 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:50:51.083655 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:50:51.083669 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:50:51.083675 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:50:51.083682 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:50:51.083688 | orchestrator |
2026-04-05 00:50:51.083694 | orchestrator |
2026-04-05 00:50:51.083700 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:50:51.083707 | orchestrator | Sunday 05 April 2026 00:50:49 +0000 (0:00:11.547) 0:01:56.043 **********
2026-04-05 00:50:51.083713 | orchestrator | ===============================================================================
2026-04-05 00:50:51.083719 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 46.05s
2026-04-05 00:50:51.083725 | orchestrator | osism.services.netdata : Add repository -------------------------------- 17.99s
2026-04-05 00:50:51.083732 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.55s
2026-04-05 00:50:51.083738 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 7.54s
2026-04-05 00:50:51.083744 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.93s
2026-04-05 00:50:51.083751 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 3.66s
2026-04-05 00:50:51.083757 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 3.42s
2026-04-05 00:50:51.083763 | orchestrator | Group hosts based on enabled services ----------------------------------- 3.23s
2026-04-05 00:50:51.083769 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.64s
2026-04-05 00:50:51.083779 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 2.29s
2026-04-05 00:50:51.083786 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.22s
2026-04-05 00:50:51.083792 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.21s
2026-04-05 00:50:51.083797 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 2.20s
2026-04-05 00:50:51.083804 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.89s
2026-04-05 00:50:51.083810 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.84s
2026-04-05 00:50:51.083817 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.66s
2026-04-05 00:50:51.083823 | orchestrator | 2026-04-05 00:50:51 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:50:54.130222 | orchestrator | 2026-04-05 00:50:54 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED
2026-04-05 00:50:54.133646 | orchestrator | 2026-04-05 00:50:54 | INFO  | Task 4fd79e15-2db5-43fb-bcb7-72aac79cd865 is in state STARTED
2026-04-05 00:50:54.137405 | orchestrator | 2026-04-05 00:50:54 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED
2026-04-05 00:50:54.139279 | orchestrator | 2026-04-05 00:50:54 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:50:57.182945 | orchestrator | 2026-04-05 00:50:57 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED
2026-04-05 00:50:57.187129 | orchestrator | 2026-04-05 00:50:57 | INFO  | Task 4fd79e15-2db5-43fb-bcb7-72aac79cd865 is in state STARTED
2026-04-05 00:50:57.189021 |
Wait 1 second(s) until the next check 2026-04-05 00:51:46.167345 | orchestrator | 2026-04-05 00:51:46 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED 2026-04-05 00:51:46.169390 | orchestrator | 2026-04-05 00:51:46 | INFO  | Task 4fd79e15-2db5-43fb-bcb7-72aac79cd865 is in state STARTED 2026-04-05 00:51:46.171556 | orchestrator | 2026-04-05 00:51:46 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:51:46.171603 | orchestrator | 2026-04-05 00:51:46 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:51:49.214194 | orchestrator | 2026-04-05 00:51:49 | INFO  | Task fe85166c-1852-4f5f-a1f7-3044d487d4ba is in state STARTED 2026-04-05 00:51:49.214311 | orchestrator | 2026-04-05 00:51:49 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED 2026-04-05 00:51:49.218853 | orchestrator | 2026-04-05 00:51:49 | INFO  | Task bc9d1f0d-ce1f-4205-a18d-458bfe282f62 is in state STARTED 2026-04-05 00:51:49.219777 | orchestrator | 2026-04-05 00:51:49 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:51:49.220723 | orchestrator | 2026-04-05 00:51:49 | INFO  | Task 7f26a003-a3ed-4734-889e-bba26358657b is in state STARTED 2026-04-05 00:51:49.227432 | orchestrator | 2026-04-05 00:51:49 | INFO  | Task 4fd79e15-2db5-43fb-bcb7-72aac79cd865 is in state SUCCESS 2026-04-05 00:51:49.231205 | orchestrator | 2026-04-05 00:51:49.231264 | orchestrator | 2026-04-05 00:51:49.231277 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-04-05 00:51:49.231288 | orchestrator | 2026-04-05 00:51:49.231298 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-04-05 00:51:49.231310 | orchestrator | Sunday 05 April 2026 00:48:46 +0000 (0:00:00.495) 0:00:00.495 ********** 2026-04-05 00:51:49.231321 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, 
testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:51:49.231332 | orchestrator | 2026-04-05 00:51:49.231342 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-04-05 00:51:49.231374 | orchestrator | Sunday 05 April 2026 00:48:47 +0000 (0:00:01.575) 0:00:02.071 ********** 2026-04-05 00:51:49.231386 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-05 00:51:49.231396 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-05 00:51:49.231406 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-05 00:51:49.231416 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-05 00:51:49.231474 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-05 00:51:49.231487 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-05 00:51:49.231496 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-05 00:51:49.231506 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-05 00:51:49.231515 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-05 00:51:49.231525 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-05 00:51:49.231536 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-05 00:51:49.231545 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-05 00:51:49.231555 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-05 00:51:49.231565 | orchestrator | changed: 
[testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-05 00:51:49.231574 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-05 00:51:49.231584 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-05 00:51:49.231593 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-05 00:51:49.231603 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-05 00:51:49.231612 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-05 00:51:49.231622 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-05 00:51:49.231631 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-05 00:51:49.231641 | orchestrator | 2026-04-05 00:51:49.231651 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-04-05 00:51:49.231660 | orchestrator | Sunday 05 April 2026 00:48:52 +0000 (0:00:05.020) 0:00:07.091 ********** 2026-04-05 00:51:49.231677 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:51:49.231689 | orchestrator | 2026-04-05 00:51:49.231698 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-04-05 00:51:49.231708 | orchestrator | Sunday 05 April 2026 00:48:54 +0000 (0:00:01.663) 0:00:08.755 ********** 2026-04-05 00:51:49.231722 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:51:49.231737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:51:49.231768 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:51:49.231786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 
00:51:49.231798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:51:49.231809 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:51:49.231821 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.231834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.231853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.231880 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:51:49.231907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': 
True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.231924 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.231942 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.231960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 
00:51:49.231996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.232015 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.232033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.232071 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
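The `(item=...)` dicts in the loop output above are kolla-style container definitions (name, image, environment, volumes). As an illustrative aside, not part of the job and not kolla-ansible's actual code, the shape of such a dict can be flattened into the `docker run` invocation it implies; the `cron` literal below is copied from the logged output, while `to_docker_run` is a hypothetical helper:

```python
# Sketch: rebuild the `docker run` command implied by one of the kolla
# container definitions logged above. The `cron` dict literal is copied from
# the "changed: [...] => (item=...)" loop output; the helper itself is
# illustrative, not how kolla-ansible really starts containers.
cron = {
    "container_name": "cron",
    "image": "registry.osism.tech/kolla/cron:2025.1",
    "environment": {"KOLLA_LOGROTATE_SCHEDULE": "daily"},
    "volumes": [
        "/etc/kolla/cron/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "/etc/timezone:/etc/timezone:ro",
        "kolla_logs:/var/log/kolla/",
    ],
}

def to_docker_run(svc: dict) -> str:
    """Flatten a kolla-style service dict into a docker run command string."""
    parts = ["docker", "run", "-d", "--name", svc["container_name"]]
    for key, value in svc.get("environment", {}).items():
        parts += ["-e", f"{key}={value}"]          # environment variables
    for volume in svc.get("volumes", []):
        parts += ["-v", volume]                    # bind mounts / named volumes
    if svc.get("privileged"):
        parts.append("--privileged")               # e.g. kolla_toolbox sets this
    parts.append(svc["image"])
    return " ".join(parts)

print(to_docker_run(cron))
```

Reading the loop items this way makes it easier to spot differences between services, e.g. that `kolla_toolbox` is privileged and mounts `/dev/` and `/run/`, while `fluentd` and `cron` only mount config, timezone, and log volumes.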
2026-04-05 00:51:49.232090 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.232107 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.232126 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.232143 | orchestrator | 2026-04-05 00:51:49.232160 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-04-05 00:51:49.232176 | orchestrator | Sunday 05 April 2026 00:49:03 +0000 (0:00:08.974) 0:00:17.729 ********** 2026-04-05 00:51:49.232192 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:51:49.232215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:51:49.232232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:51:49.232249 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-04-05 00:51:49.232284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.232300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:51:49.232316 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:51:49.232334 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.232352 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.232376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.232394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 
00:51:49.232411 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.232427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.232438 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:51:49.232524 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.232538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.232549 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.232559 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:51:49.232585 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:51:49.232595 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:51:49.232605 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:51:49.232624 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:51:49.232641 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:51:49.232659 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.232677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.232688 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:51:49.232698 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.232708 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:51:49.232717 | orchestrator | 2026-04-05 00:51:49.232727 | orchestrator | TASK [service-cert-copy : common | Copying 
over backend internal TLS key] ****** 2026-04-05 00:51:49.232737 | orchestrator | Sunday 05 April 2026 00:49:06 +0000 (0:00:03.308) 0:00:21.038 ********** 2026-04-05 00:51:49.232747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:51:49.232757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.232768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:51:49.232782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.232803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.232813 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:51:49.232836 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:51:49.232846 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.232856 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.232866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.232876 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:51:49.232887 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.232902 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:51:49.232912 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:51:49.232926 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:51:49.232937 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.232952 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.232962 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:51:49.232972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:51:49.232982 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:51:49.232992 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.233001 | orchestrator | skipping: [testbed-node-4] 2026-04-05 
00:51:49.233011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.233027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.233036 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:51:49.233046 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.233056 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.233066 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:51:49.233076 | orchestrator | 2026-04-05 00:51:49.233085 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-04-05 00:51:49.233095 | orchestrator | Sunday 05 April 2026 00:49:10 +0000 (0:00:03.674) 0:00:24.713 ********** 2026-04-05 00:51:49.233105 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:51:49.233114 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:51:49.233124 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:51:49.233134 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:51:49.233143 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:51:49.233158 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:51:49.233168 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:51:49.233177 | orchestrator | 2026-04-05 00:51:49.233187 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-04-05 00:51:49.233196 | orchestrator | Sunday 05 April 2026 00:49:12 +0000 (0:00:01.656) 0:00:26.369 ********** 2026-04-05 00:51:49.233206 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:51:49.233215 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:51:49.233225 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:51:49.233234 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:51:49.233244 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:51:49.233253 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:51:49.233263 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:51:49.233272 | orchestrator | 2026-04-05 00:51:49.233282 | 
orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-04-05 00:51:49.233291 | orchestrator | Sunday 05 April 2026 00:49:14 +0000 (0:00:01.932) 0:00:28.302 ********** 2026-04-05 00:51:49.233301 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:51:49.233310 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:51:49.233319 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:51:49.233329 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:51:49.233338 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:51:49.233348 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:51:49.233357 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:51:49.233366 | orchestrator | 2026-04-05 00:51:49.233376 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-04-05 00:51:49.233391 | orchestrator | Sunday 05 April 2026 00:49:16 +0000 (0:00:02.842) 0:00:31.145 ********** 2026-04-05 00:51:49.233401 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:51:49.233410 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:51:49.233420 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:51:49.233429 | orchestrator | changed: [testbed-manager] 2026-04-05 00:51:49.233438 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:51:49.233475 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:51:49.233486 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:51:49.233496 | orchestrator | 2026-04-05 00:51:49.233506 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-04-05 00:51:49.233515 | orchestrator | Sunday 05 April 2026 00:49:20 +0000 (0:00:03.846) 0:00:34.991 ********** 2026-04-05 00:51:49.233525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:51:49.233541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:51:49.233556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:51:49.233566 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:51:49.233582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.233592 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:51:49.233608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.233618 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:51:49.233628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.233643 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.233653 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:51:49.233663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.233680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.233690 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.233705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.233715 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.233725 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.233739 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.233750 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.233760 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.233782 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.233796 | orchestrator | 2026-04-05 00:51:49.233806 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-04-05 00:51:49.233816 | orchestrator | Sunday 05 April 2026 00:49:29 +0000 (0:00:08.484) 0:00:43.476 ********** 2026-04-05 00:51:49.233826 | orchestrator | [WARNING]: Skipped 2026-04-05 00:51:49.233837 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-04-05 00:51:49.233847 | orchestrator | to this access issue: 2026-04-05 00:51:49.233856 
| orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-04-05 00:51:49.233865 | orchestrator | directory 2026-04-05 00:51:49.233875 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-05 00:51:49.233885 | orchestrator | 2026-04-05 00:51:49.233895 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-04-05 00:51:49.233904 | orchestrator | Sunday 05 April 2026 00:49:30 +0000 (0:00:01.593) 0:00:45.070 ********** 2026-04-05 00:51:49.233914 | orchestrator | [WARNING]: Skipped 2026-04-05 00:51:49.233923 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-04-05 00:51:49.233933 | orchestrator | to this access issue: 2026-04-05 00:51:49.233942 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-04-05 00:51:49.233952 | orchestrator | directory 2026-04-05 00:51:49.233961 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-05 00:51:49.233971 | orchestrator | 2026-04-05 00:51:49.233980 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-04-05 00:51:49.233990 | orchestrator | Sunday 05 April 2026 00:49:32 +0000 (0:00:01.151) 0:00:46.222 ********** 2026-04-05 00:51:49.233999 | orchestrator | [WARNING]: Skipped 2026-04-05 00:51:49.234009 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-04-05 00:51:49.234103 | orchestrator | to this access issue: 2026-04-05 00:51:49.234124 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-04-05 00:51:49.234141 | orchestrator | directory 2026-04-05 00:51:49.234157 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-05 00:51:49.234173 | orchestrator | 2026-04-05 00:51:49.234189 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-04-05 
00:51:49.234206 | orchestrator | Sunday 05 April 2026 00:49:33 +0000 (0:00:01.619) 0:00:47.841 ********** 2026-04-05 00:51:49.234222 | orchestrator | [WARNING]: Skipped 2026-04-05 00:51:49.234240 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-04-05 00:51:49.234258 | orchestrator | to this access issue: 2026-04-05 00:51:49.234274 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-04-05 00:51:49.234289 | orchestrator | directory 2026-04-05 00:51:49.234300 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-05 00:51:49.234309 | orchestrator | 2026-04-05 00:51:49.234319 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-04-05 00:51:49.234328 | orchestrator | Sunday 05 April 2026 00:49:35 +0000 (0:00:01.840) 0:00:49.681 ********** 2026-04-05 00:51:49.234337 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:51:49.234347 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:51:49.234356 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:51:49.234365 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:51:49.234375 | orchestrator | changed: [testbed-manager] 2026-04-05 00:51:49.234385 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:51:49.234394 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:51:49.234403 | orchestrator | 2026-04-05 00:51:49.234417 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-04-05 00:51:49.234433 | orchestrator | Sunday 05 April 2026 00:49:44 +0000 (0:00:08.803) 0:00:58.484 ********** 2026-04-05 00:51:49.234482 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-05 00:51:49.234509 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-05 00:51:49.234537 | 
orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-05 00:51:49.234553 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-05 00:51:49.234568 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-05 00:51:49.234581 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-05 00:51:49.234596 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-05 00:51:49.234612 | orchestrator | 2026-04-05 00:51:49.234629 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-04-05 00:51:49.234644 | orchestrator | Sunday 05 April 2026 00:49:50 +0000 (0:00:06.227) 0:01:04.712 ********** 2026-04-05 00:51:49.234662 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:51:49.234679 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:51:49.234695 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:51:49.234709 | orchestrator | changed: [testbed-manager] 2026-04-05 00:51:49.234719 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:51:49.234729 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:51:49.234738 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:51:49.234747 | orchestrator | 2026-04-05 00:51:49.234757 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-04-05 00:51:49.234767 | orchestrator | Sunday 05 April 2026 00:49:53 +0000 (0:00:02.932) 0:01:07.645 ********** 2026-04-05 00:51:49.234791 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:51:49.234803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.234813 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:51:49.234823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.234841 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.234861 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:51:49.234871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.234881 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.234898 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.234909 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:51:49.234919 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.234929 | orchestrator | ok: [testbed-manager] => 
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:51:49.234945 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.234959 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:51:49.234970 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.234984 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.234995 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:51:49.235005 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 
00:51:49.235015 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.235025 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.235042 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.235052 | orchestrator | 2026-04-05 00:51:49.235062 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-04-05 00:51:49.235071 | orchestrator | Sunday 05 April 2026 00:49:56 +0000 (0:00:03.384) 0:01:11.029 ********** 2026-04-05 00:51:49.235081 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-05 00:51:49.235095 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-05 00:51:49.235105 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-05 00:51:49.235114 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-05 00:51:49.235124 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-05 00:51:49.235133 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-05 00:51:49.235143 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-05 00:51:49.235152 | orchestrator | 2026-04-05 00:51:49.235162 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-04-05 00:51:49.235171 | orchestrator | Sunday 05 April 2026 00:50:00 +0000 (0:00:03.381) 0:01:14.411 ********** 2026-04-05 00:51:49.235181 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-05 00:51:49.235190 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-05 00:51:49.235200 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-05 00:51:49.235210 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-05 00:51:49.235219 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-05 00:51:49.235228 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-05 00:51:49.235243 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-05 00:51:49.235264 | orchestrator | 2026-04-05 00:51:49.235296 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-04-05 00:51:49.235312 | orchestrator | Sunday 05 April 2026 
00:50:03 +0000 (0:00:03.002) 0:01:17.414 ********** 2026-04-05 00:51:49.235329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:51:49.235345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:51:49.235375 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:51:49.235393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:51:49.235411 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:51:49.235436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.235494 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}}) 2026-04-05 00:51:49.235522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.235539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.235565 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.235576 | orchestrator | 
changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.235586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.235596 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:51:49.235607 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.235622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.235633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.235659 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.235682 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.235704 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.235721 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.235744 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:49.235760 | orchestrator | 2026-04-05 00:51:49.235776 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-04-05 00:51:49.235792 | orchestrator | Sunday 05 April 2026 00:50:07 +0000 (0:00:04.118) 0:01:21.533 ********** 2026-04-05 00:51:49.235807 | orchestrator | changed: [testbed-manager] 
=> { 2026-04-05 00:51:49.235822 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:51:49.235838 | orchestrator | } 2026-04-05 00:51:49.235854 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 00:51:49.235868 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:51:49.235882 | orchestrator | } 2026-04-05 00:51:49.235897 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 00:51:49.235911 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:51:49.235926 | orchestrator | } 2026-04-05 00:51:49.235940 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 00:51:49.235955 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:51:49.235970 | orchestrator | } 2026-04-05 00:51:49.235984 | orchestrator | changed: [testbed-node-3] => { 2026-04-05 00:51:49.235998 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:51:49.236013 | orchestrator | } 2026-04-05 00:51:49.236027 | orchestrator | changed: [testbed-node-4] => { 2026-04-05 00:51:49.236042 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:51:49.236057 | orchestrator | } 2026-04-05 00:51:49.236073 | orchestrator | changed: [testbed-node-5] => { 2026-04-05 00:51:49.236087 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:51:49.236102 | orchestrator | } 2026-04-05 00:51:49.236131 | orchestrator | 2026-04-05 00:51:49.236147 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 00:51:49.236163 | orchestrator | Sunday 05 April 2026 00:50:08 +0000 (0:00:00.906) 0:01:22.440 ********** 2026-04-05 00:51:49.236192 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:51:49.236209 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.236226 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.236244 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:51:49.236260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:51:49.236276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.236302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.236320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:51:49.236349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.236367 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:51:49.236392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.236408 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:51:49.236424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:51:49.236439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-04-05 00:51:49.236484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.236500 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:51:49.236515 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:51:49.236539 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.236571 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.236588 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:51:49.236614 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:51:49.236632 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.236648 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.236663 | orchestrator | 
skipping: [testbed-node-4] 2026-04-05 00:51:49.236680 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:51:49.236696 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.236713 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:49.236731 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:51:49.236748 | orchestrator | 2026-04-05 00:51:49.236771 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-04-05 00:51:49.236800 | orchestrator | Sunday 05 April 2026 00:50:10 
+0000 (0:00:01.760) 0:01:24.200 ********** 2026-04-05 00:51:49.236816 | orchestrator | changed: [testbed-manager] 2026-04-05 00:51:49.236834 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:51:49.236851 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:51:49.236866 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:51:49.236884 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:51:49.236901 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:51:49.236916 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:51:49.236932 | orchestrator | 2026-04-05 00:51:49.236950 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-04-05 00:51:49.236967 | orchestrator | Sunday 05 April 2026 00:50:11 +0000 (0:00:01.399) 0:01:25.600 ********** 2026-04-05 00:51:49.236983 | orchestrator | changed: [testbed-manager] 2026-04-05 00:51:49.236998 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:51:49.237008 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:51:49.237018 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:51:49.237027 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:51:49.237036 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:51:49.237046 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:51:49.237056 | orchestrator | 2026-04-05 00:51:49.237065 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-05 00:51:49.237075 | orchestrator | Sunday 05 April 2026 00:50:12 +0000 (0:00:01.293) 0:01:26.893 ********** 2026-04-05 00:51:49.237084 | orchestrator | 2026-04-05 00:51:49.237094 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-05 00:51:49.237103 | orchestrator | Sunday 05 April 2026 00:50:12 +0000 (0:00:00.080) 0:01:26.973 ********** 2026-04-05 00:51:49.237113 | orchestrator | 2026-04-05 00:51:49.237122 | orchestrator | TASK [common : Flush 
handlers] ************************************************* 2026-04-05 00:51:49.237132 | orchestrator | Sunday 05 April 2026 00:50:12 +0000 (0:00:00.081) 0:01:27.055 ********** 2026-04-05 00:51:49.237141 | orchestrator | 2026-04-05 00:51:49.237160 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-05 00:51:49.237170 | orchestrator | Sunday 05 April 2026 00:50:12 +0000 (0:00:00.072) 0:01:27.127 ********** 2026-04-05 00:51:49.237180 | orchestrator | 2026-04-05 00:51:49.237190 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-05 00:51:49.237199 | orchestrator | Sunday 05 April 2026 00:50:13 +0000 (0:00:00.075) 0:01:27.203 ********** 2026-04-05 00:51:49.237209 | orchestrator | 2026-04-05 00:51:49.237219 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-05 00:51:49.237228 | orchestrator | Sunday 05 April 2026 00:50:13 +0000 (0:00:00.091) 0:01:27.295 ********** 2026-04-05 00:51:49.237238 | orchestrator | 2026-04-05 00:51:49.237247 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-05 00:51:49.237257 | orchestrator | Sunday 05 April 2026 00:50:13 +0000 (0:00:00.078) 0:01:27.373 ********** 2026-04-05 00:51:49.237266 | orchestrator | 2026-04-05 00:51:49.237276 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-04-05 00:51:49.237285 | orchestrator | Sunday 05 April 2026 00:50:13 +0000 (0:00:00.093) 0:01:27.467 ********** 2026-04-05 00:51:49.237295 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:51:49.237305 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:51:49.237315 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:51:49.237324 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:51:49.237334 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:51:49.237344 | orchestrator 
| changed: [testbed-manager] 2026-04-05 00:51:49.237353 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:51:49.237363 | orchestrator | 2026-04-05 00:51:49.237372 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-04-05 00:51:49.237382 | orchestrator | Sunday 05 April 2026 00:50:47 +0000 (0:00:34.007) 0:02:01.475 ********** 2026-04-05 00:51:49.237392 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:51:49.237411 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:51:49.237421 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:51:49.237431 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:51:49.237441 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:51:49.237482 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:51:49.237493 | orchestrator | changed: [testbed-manager] 2026-04-05 00:51:49.237502 | orchestrator | 2026-04-05 00:51:49.237512 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-04-05 00:51:49.237521 | orchestrator | Sunday 05 April 2026 00:51:34 +0000 (0:00:47.512) 0:02:48.987 ********** 2026-04-05 00:51:49.237531 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:51:49.237541 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:51:49.237550 | orchestrator | ok: [testbed-manager] 2026-04-05 00:51:49.237560 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:51:49.237569 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:51:49.237579 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:51:49.237588 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:51:49.237598 | orchestrator | 2026-04-05 00:51:49.237607 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-04-05 00:51:49.237617 | orchestrator | Sunday 05 April 2026 00:51:37 +0000 (0:00:02.385) 0:02:51.373 ********** 2026-04-05 00:51:49.237627 | orchestrator | changed: [testbed-node-0] 2026-04-05 
00:51:49.237636 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:51:49.237646 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:51:49.237655 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:51:49.237664 | orchestrator | changed: [testbed-manager] 2026-04-05 00:51:49.237674 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:51:49.237683 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:51:49.237693 | orchestrator | 2026-04-05 00:51:49.237702 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:51:49.237714 | orchestrator | testbed-manager : ok=24  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 00:51:49.237725 | orchestrator | testbed-node-0 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 00:51:49.237742 | orchestrator | testbed-node-1 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 00:51:49.237752 | orchestrator | testbed-node-2 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 00:51:49.237761 | orchestrator | testbed-node-3 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 00:51:49.237771 | orchestrator | testbed-node-4 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 00:51:49.237781 | orchestrator | testbed-node-5 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 00:51:49.237790 | orchestrator | 2026-04-05 00:51:49.237800 | orchestrator | 2026-04-05 00:51:49.237810 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:51:49.237820 | orchestrator | Sunday 05 April 2026 00:51:47 +0000 (0:00:10.423) 0:03:01.797 ********** 2026-04-05 00:51:49.237829 | orchestrator | =============================================================================== 2026-04-05 00:51:49.237839 
| orchestrator | common : Restart kolla-toolbox container ------------------------------- 47.51s 2026-04-05 00:51:49.237849 | orchestrator | common : Restart fluentd container ------------------------------------- 34.01s 2026-04-05 00:51:49.237859 | orchestrator | common : Restart cron container ---------------------------------------- 10.42s 2026-04-05 00:51:49.237868 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 8.97s 2026-04-05 00:51:49.237902 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 8.80s 2026-04-05 00:51:49.237919 | orchestrator | common : Copying over config.json files for services -------------------- 8.48s 2026-04-05 00:51:49.237934 | orchestrator | common : Copying over cron logrotate config file ------------------------ 6.23s 2026-04-05 00:51:49.237950 | orchestrator | common : Ensuring config directories exist ------------------------------ 5.02s 2026-04-05 00:51:49.237968 | orchestrator | service-check-containers : common | Check containers -------------------- 4.12s 2026-04-05 00:51:49.237985 | orchestrator | common : Copying over kolla.target -------------------------------------- 3.85s 2026-04-05 00:51:49.238002 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.67s 2026-04-05 00:51:49.238089 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.38s 2026-04-05 00:51:49.238104 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.38s 2026-04-05 00:51:49.238114 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 3.31s 2026-04-05 00:51:49.238125 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.00s 2026-04-05 00:51:49.238134 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.93s 2026-04-05 00:51:49.238144 | 
orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 2.84s 2026-04-05 00:51:49.238154 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.39s 2026-04-05 00:51:49.238164 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.93s 2026-04-05 00:51:49.238173 | orchestrator | common : Find custom fluentd output config files ------------------------ 1.84s 2026-04-05 00:51:49.242149 | orchestrator | 2026-04-05 00:51:49 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:51:49.242222 | orchestrator | 2026-04-05 00:51:49 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:51:52.275298 | orchestrator | 2026-04-05 00:51:52 | INFO  | Task fe85166c-1852-4f5f-a1f7-3044d487d4ba is in state STARTED 2026-04-05 00:51:52.276125 | orchestrator | 2026-04-05 00:51:52 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED 2026-04-05 00:51:52.276979 | orchestrator | 2026-04-05 00:51:52 | INFO  | Task bc9d1f0d-ce1f-4205-a18d-458bfe282f62 is in state STARTED 2026-04-05 00:51:52.277751 | orchestrator | 2026-04-05 00:51:52 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:51:52.281082 | orchestrator | 2026-04-05 00:51:52 | INFO  | Task 7f26a003-a3ed-4734-889e-bba26358657b is in state STARTED 2026-04-05 00:51:52.282004 | orchestrator | 2026-04-05 00:51:52 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:51:52.282083 | orchestrator | 2026-04-05 00:51:52 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:51:55.314944 | orchestrator | 2026-04-05 00:51:55 | INFO  | Task fe85166c-1852-4f5f-a1f7-3044d487d4ba is in state STARTED 2026-04-05 00:51:55.318498 | orchestrator | 2026-04-05 00:51:55 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED 2026-04-05 00:51:55.318582 | orchestrator | 2026-04-05 00:51:55 | INFO  | Task 
bc9d1f0d-ce1f-4205-a18d-458bfe282f62 is in state STARTED 2026-04-05 00:51:55.319236 | orchestrator | 2026-04-05 00:51:55 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:51:55.320034 | orchestrator | 2026-04-05 00:51:55 | INFO  | Task 7f26a003-a3ed-4734-889e-bba26358657b is in state STARTED 2026-04-05 00:51:55.321054 | orchestrator | 2026-04-05 00:51:55 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:51:55.321078 | orchestrator | 2026-04-05 00:51:55 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:51:58.381885 | orchestrator | 2026-04-05 00:51:58 | INFO  | Task fe85166c-1852-4f5f-a1f7-3044d487d4ba is in state STARTED 2026-04-05 00:51:58.382533 | orchestrator | 2026-04-05 00:51:58 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED 2026-04-05 00:51:58.385101 | orchestrator | 2026-04-05 00:51:58 | INFO  | Task bc9d1f0d-ce1f-4205-a18d-458bfe282f62 is in state STARTED 2026-04-05 00:51:58.387495 | orchestrator | 2026-04-05 00:51:58 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:51:58.390094 | orchestrator | 2026-04-05 00:51:58 | INFO  | Task 7f26a003-a3ed-4734-889e-bba26358657b is in state STARTED 2026-04-05 00:51:58.392805 | orchestrator | 2026-04-05 00:51:58 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:51:58.392856 | orchestrator | 2026-04-05 00:51:58 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:52:01.459878 | orchestrator | 2026-04-05 00:52:01 | INFO  | Task fe85166c-1852-4f5f-a1f7-3044d487d4ba is in state STARTED 2026-04-05 00:52:01.460684 | orchestrator | 2026-04-05 00:52:01 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED 2026-04-05 00:52:01.462378 | orchestrator | 2026-04-05 00:52:01 | INFO  | Task bc9d1f0d-ce1f-4205-a18d-458bfe282f62 is in state STARTED 2026-04-05 00:52:01.463613 | orchestrator | 2026-04-05 00:52:01 | INFO  | Task 
90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:52:01.465544 | orchestrator | 2026-04-05 00:52:01 | INFO  | Task 7f26a003-a3ed-4734-889e-bba26358657b is in state STARTED 2026-04-05 00:52:01.466882 | orchestrator | 2026-04-05 00:52:01 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:52:01.467182 | orchestrator | 2026-04-05 00:52:01 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:52:04.636809 | orchestrator | 2026-04-05 00:52:04 | INFO  | Task fe85166c-1852-4f5f-a1f7-3044d487d4ba is in state STARTED 2026-04-05 00:52:04.637770 | orchestrator | 2026-04-05 00:52:04 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED 2026-04-05 00:52:04.638816 | orchestrator | 2026-04-05 00:52:04 | INFO  | Task bc9d1f0d-ce1f-4205-a18d-458bfe282f62 is in state STARTED 2026-04-05 00:52:04.641674 | orchestrator | 2026-04-05 00:52:04 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:52:04.641726 | orchestrator | 2026-04-05 00:52:04 | INFO  | Task 7f26a003-a3ed-4734-889e-bba26358657b is in state STARTED 2026-04-05 00:52:04.644247 | orchestrator | 2026-04-05 00:52:04 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:52:04.645125 | orchestrator | 2026-04-05 00:52:04 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:52:07.727723 | orchestrator | 2026-04-05 00:52:07 | INFO  | Task fe85166c-1852-4f5f-a1f7-3044d487d4ba is in state STARTED 2026-04-05 00:52:07.728496 | orchestrator | 2026-04-05 00:52:07 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED 2026-04-05 00:52:07.731993 | orchestrator | 2026-04-05 00:52:07 | INFO  | Task bc9d1f0d-ce1f-4205-a18d-458bfe282f62 is in state STARTED 2026-04-05 00:52:07.733505 | orchestrator | 2026-04-05 00:52:07 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:52:07.734695 | orchestrator | 2026-04-05 00:52:07 | INFO  | Task 
7f26a003-a3ed-4734-889e-bba26358657b is in state STARTED 2026-04-05 00:52:07.735973 | orchestrator | 2026-04-05 00:52:07 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:52:07.736194 | orchestrator | 2026-04-05 00:52:07 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:52:10.864949 | orchestrator | 2026-04-05 00:52:10 | INFO  | Task fe85166c-1852-4f5f-a1f7-3044d487d4ba is in state STARTED 2026-04-05 00:52:10.865138 | orchestrator | 2026-04-05 00:52:10 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED 2026-04-05 00:52:10.865165 | orchestrator | 2026-04-05 00:52:10 | INFO  | Task bc9d1f0d-ce1f-4205-a18d-458bfe282f62 is in state STARTED 2026-04-05 00:52:10.865182 | orchestrator | 2026-04-05 00:52:10 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:52:10.865209 | orchestrator | 2026-04-05 00:52:10 | INFO  | Task 7f26a003-a3ed-4734-889e-bba26358657b is in state STARTED 2026-04-05 00:52:10.865220 | orchestrator | 2026-04-05 00:52:10 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:52:10.865233 | orchestrator | 2026-04-05 00:52:10 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:52:13.966478 | orchestrator | 2026-04-05 00:52:13.966579 | orchestrator | 2026-04-05 00:52:13.966598 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 00:52:13.966611 | orchestrator | 2026-04-05 00:52:13.966623 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 00:52:13.966634 | orchestrator | Sunday 05 April 2026 00:51:53 +0000 (0:00:00.511) 0:00:00.511 ********** 2026-04-05 00:52:13.966646 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:52:13.966659 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:52:13.966671 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:52:13.966682 | orchestrator | 2026-04-05 00:52:13.966693 | 
orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 00:52:13.966704 | orchestrator | Sunday 05 April 2026 00:51:54 +0000 (0:00:00.395) 0:00:00.906 ********** 2026-04-05 00:52:13.966715 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-04-05 00:52:13.966727 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-04-05 00:52:13.966738 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-04-05 00:52:13.966749 | orchestrator | 2026-04-05 00:52:13.966760 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-04-05 00:52:13.966771 | orchestrator | 2026-04-05 00:52:13.966782 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-04-05 00:52:13.966793 | orchestrator | Sunday 05 April 2026 00:51:54 +0000 (0:00:00.408) 0:00:01.315 ********** 2026-04-05 00:52:13.966804 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:52:13.966816 | orchestrator | 2026-04-05 00:52:13.966827 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-04-05 00:52:13.966838 | orchestrator | Sunday 05 April 2026 00:51:55 +0000 (0:00:00.849) 0:00:02.165 ********** 2026-04-05 00:52:13.966850 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-04-05 00:52:13.966877 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-04-05 00:52:13.966900 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-04-05 00:52:13.966911 | orchestrator | 2026-04-05 00:52:13.966922 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-04-05 00:52:13.966933 | orchestrator | Sunday 05 April 2026 00:51:57 +0000 (0:00:02.401) 0:00:04.566 ********** 2026-04-05 00:52:13.966946 | orchestrator | 
changed: [testbed-node-0] => (item=memcached) 2026-04-05 00:52:13.966959 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-04-05 00:52:13.966972 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-04-05 00:52:13.966984 | orchestrator | 2026-04-05 00:52:13.966997 | orchestrator | TASK [service-check-containers : memcached | Check containers] ***************** 2026-04-05 00:52:13.967009 | orchestrator | Sunday 05 April 2026 00:52:01 +0000 (0:00:03.431) 0:00:07.998 ********** 2026-04-05 00:52:13.967028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-05 00:52:13.967072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': 
['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-05 00:52:13.967121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-05 00:52:13.967137 | orchestrator | 2026-04-05 00:52:13.967150 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] *** 2026-04-05 00:52:13.967163 | orchestrator | Sunday 05 April 2026 00:52:03 +0000 (0:00:02.319) 0:00:10.317 ********** 2026-04-05 00:52:13.967176 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 00:52:13.967187 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:52:13.967198 | orchestrator | } 2026-04-05 00:52:13.967209 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 00:52:13.967219 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:52:13.967230 | orchestrator | } 2026-04-05 00:52:13.967240 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 00:52:13.967251 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:52:13.967261 | orchestrator | } 2026-04-05 00:52:13.967272 | orchestrator | 2026-04-05 00:52:13.967282 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 00:52:13.967293 | orchestrator | Sunday 05 April 2026 00:52:04 +0000 (0:00:00.692) 
0:00:11.010 ********** 2026-04-05 00:52:13.967305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-05 00:52:13.967316 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:52:13.967327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-05 00:52:13.967351 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:52:13.967363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': 
['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-05 00:52:13.967375 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:52:13.967385 | orchestrator | 2026-04-05 00:52:13.967396 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-04-05 00:52:13.967407 | orchestrator | Sunday 05 April 2026 00:52:06 +0000 (0:00:02.064) 0:00:13.075 ********** 2026-04-05 00:52:13.967439 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:52:13.967450 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:52:13.967461 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:52:13.967472 | orchestrator | 2026-04-05 00:52:13.967483 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:52:13.967496 | orchestrator | testbed-node-0 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 00:52:13.967508 | orchestrator | testbed-node-1 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 00:52:13.967525 | orchestrator | testbed-node-2 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 00:52:13.967536 | orchestrator | 2026-04-05 00:52:13.967547 | orchestrator | 2026-04-05 00:52:13.967558 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:52:13.967569 | orchestrator | Sunday 05 April 2026 00:52:10 +0000 (0:00:04.459) 0:00:17.534 ********** 
2026-04-05 00:52:13.967588 | orchestrator | =============================================================================== 2026-04-05 00:52:13.967599 | orchestrator | memcached : Restart memcached container --------------------------------- 4.46s 2026-04-05 00:52:13.967610 | orchestrator | memcached : Copying over config.json files for services ----------------- 3.43s 2026-04-05 00:52:13.967620 | orchestrator | memcached : Ensuring config directories exist --------------------------- 2.40s 2026-04-05 00:52:13.967631 | orchestrator | service-check-containers : memcached | Check containers ----------------- 2.32s 2026-04-05 00:52:13.967641 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.06s 2026-04-05 00:52:13.967652 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.85s 2026-04-05 00:52:13.967662 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 0.69s 2026-04-05 00:52:13.967673 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.41s 2026-04-05 00:52:13.967694 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.40s 2026-04-05 00:52:13.967705 | orchestrator | 2026-04-05 00:52:13 | INFO  | Task fe85166c-1852-4f5f-a1f7-3044d487d4ba is in state STARTED 2026-04-05 00:52:13.967716 | orchestrator | 2026-04-05 00:52:13 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED 2026-04-05 00:52:13.967727 | orchestrator | 2026-04-05 00:52:13 | INFO  | Task bc9d1f0d-ce1f-4205-a18d-458bfe282f62 is in state STARTED 2026-04-05 00:52:13.967738 | orchestrator | 2026-04-05 00:52:13 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:52:13.967749 | orchestrator | 2026-04-05 00:52:13 | INFO  | Task 7f26a003-a3ed-4734-889e-bba26358657b is in state SUCCESS 2026-04-05 00:52:13.967760 | orchestrator | 2026-04-05 00:52:13 | INFO  
| Task 3a21a578-ab24-4c01-9524-7a77190c4f11 is in state STARTED 2026-04-05 00:52:13.967770 | orchestrator | 2026-04-05 00:52:13 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:52:13.967781 | orchestrator | 2026-04-05 00:52:13 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:52:17.017888 | orchestrator | 2026-04-05 00:52:17 | INFO  | Task fe85166c-1852-4f5f-a1f7-3044d487d4ba is in state STARTED 2026-04-05 00:52:17.018350 | orchestrator | 2026-04-05 00:52:17 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED 2026-04-05 00:52:17.019449 | orchestrator | 2026-04-05 00:52:17 | INFO  | Task bc9d1f0d-ce1f-4205-a18d-458bfe282f62 is in state STARTED 2026-04-05 00:52:17.023885 | orchestrator | 2026-04-05 00:52:17 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:52:17.024370 | orchestrator | 2026-04-05 00:52:17 | INFO  | Task 3a21a578-ab24-4c01-9524-7a77190c4f11 is in state STARTED 2026-04-05 00:52:17.025595 | orchestrator | 2026-04-05 00:52:17 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:52:17.025671 | orchestrator | 2026-04-05 00:52:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:52:20.405775 | orchestrator | 2026-04-05 00:52:20 | INFO  | Task fe85166c-1852-4f5f-a1f7-3044d487d4ba is in state STARTED 2026-04-05 00:52:20.405868 | orchestrator | 2026-04-05 00:52:20 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED 2026-04-05 00:52:20.405881 | orchestrator | 2026-04-05 00:52:20 | INFO  | Task bc9d1f0d-ce1f-4205-a18d-458bfe282f62 is in state STARTED 2026-04-05 00:52:20.405888 | orchestrator | 2026-04-05 00:52:20 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:52:20.405894 | orchestrator | 2026-04-05 00:52:20 | INFO  | Task 3a21a578-ab24-4c01-9524-7a77190c4f11 is in state STARTED 2026-04-05 00:52:20.405901 | orchestrator | 2026-04-05 00:52:20 | INFO  | Task 
213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:52:20.405908 | orchestrator | 2026-04-05 00:52:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:52:23.528702 | orchestrator | 2026-04-05 00:52:23 | INFO  | Task fe85166c-1852-4f5f-a1f7-3044d487d4ba is in state STARTED 2026-04-05 00:52:23.529244 | orchestrator | 2026-04-05 00:52:23 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED 2026-04-05 00:52:23.533969 | orchestrator | 2026-04-05 00:52:23 | INFO  | Task bc9d1f0d-ce1f-4205-a18d-458bfe282f62 is in state STARTED 2026-04-05 00:52:23.534602 | orchestrator | 2026-04-05 00:52:23 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:52:23.535148 | orchestrator | 2026-04-05 00:52:23 | INFO  | Task 3a21a578-ab24-4c01-9524-7a77190c4f11 is in state STARTED 2026-04-05 00:52:23.536360 | orchestrator | 2026-04-05 00:52:23 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:52:23.536460 | orchestrator | 2026-04-05 00:52:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:52:26.611197 | orchestrator | 2026-04-05 00:52:26 | INFO  | Task fe85166c-1852-4f5f-a1f7-3044d487d4ba is in state STARTED 2026-04-05 00:52:26.611300 | orchestrator | 2026-04-05 00:52:26 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED 2026-04-05 00:52:26.611714 | orchestrator | 2026-04-05 00:52:26 | INFO  | Task bc9d1f0d-ce1f-4205-a18d-458bfe282f62 is in state STARTED 2026-04-05 00:52:26.612676 | orchestrator | 2026-04-05 00:52:26 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:52:26.615828 | orchestrator | 2026-04-05 00:52:26 | INFO  | Task 3a21a578-ab24-4c01-9524-7a77190c4f11 is in state STARTED 2026-04-05 00:52:26.615899 | orchestrator | 2026-04-05 00:52:26 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:52:26.615912 | orchestrator | 2026-04-05 00:52:26 | INFO  | Wait 1 
second(s) until the next check 2026-04-05 00:52:29.728799 | orchestrator | 2026-04-05 00:52:29 | INFO  | Task fe85166c-1852-4f5f-a1f7-3044d487d4ba is in state STARTED 2026-04-05 00:52:29.728890 | orchestrator | 2026-04-05 00:52:29 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED 2026-04-05 00:52:29.728900 | orchestrator | 2026-04-05 00:52:29 | INFO  | Task bc9d1f0d-ce1f-4205-a18d-458bfe282f62 is in state STARTED 2026-04-05 00:52:29.728907 | orchestrator | 2026-04-05 00:52:29 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:52:29.728914 | orchestrator | 2026-04-05 00:52:29 | INFO  | Task 3a21a578-ab24-4c01-9524-7a77190c4f11 is in state STARTED 2026-04-05 00:52:29.728922 | orchestrator | 2026-04-05 00:52:29 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:52:29.728929 | orchestrator | 2026-04-05 00:52:29 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:52:32.973843 | orchestrator | 2026-04-05 00:52:32 | INFO  | Task fe85166c-1852-4f5f-a1f7-3044d487d4ba is in state STARTED 2026-04-05 00:52:32.973931 | orchestrator | 2026-04-05 00:52:32 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED 2026-04-05 00:52:32.973942 | orchestrator | 2026-04-05 00:52:32 | INFO  | Task bc9d1f0d-ce1f-4205-a18d-458bfe282f62 is in state STARTED 2026-04-05 00:52:32.973949 | orchestrator | 2026-04-05 00:52:32 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:52:32.973955 | orchestrator | 2026-04-05 00:52:32 | INFO  | Task 3a21a578-ab24-4c01-9524-7a77190c4f11 is in state STARTED 2026-04-05 00:52:32.973979 | orchestrator | 2026-04-05 00:52:32 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:52:32.973988 | orchestrator | 2026-04-05 00:52:32 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:52:35.931397 | orchestrator | 2026-04-05 00:52:35.931545 | orchestrator | 2026-04-05 00:52:35 | 
INFO  | Task fe85166c-1852-4f5f-a1f7-3044d487d4ba is in state SUCCESS 2026-04-05 00:52:35.932038 | orchestrator | 2026-04-05 00:52:35.932066 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 00:52:35.932077 | orchestrator | 2026-04-05 00:52:35.932087 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 00:52:35.932097 | orchestrator | Sunday 05 April 2026 00:51:52 +0000 (0:00:00.649) 0:00:00.649 ********** 2026-04-05 00:52:35.932107 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:52:35.932118 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:52:35.932128 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:52:35.932138 | orchestrator | 2026-04-05 00:52:35.932147 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 00:52:35.932181 | orchestrator | Sunday 05 April 2026 00:51:53 +0000 (0:00:00.473) 0:00:01.123 ********** 2026-04-05 00:52:35.932192 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-04-05 00:52:35.932202 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-04-05 00:52:35.932211 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-04-05 00:52:35.932221 | orchestrator | 2026-04-05 00:52:35.932231 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-04-05 00:52:35.932240 | orchestrator | 2026-04-05 00:52:35.932250 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-04-05 00:52:35.932259 | orchestrator | Sunday 05 April 2026 00:51:53 +0000 (0:00:00.534) 0:00:01.657 ********** 2026-04-05 00:52:35.932268 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:52:35.932279 | orchestrator | 2026-04-05 00:52:35.932289 | orchestrator | TASK [redis : Ensuring 
config directories exist] ******************************* 2026-04-05 00:52:35.932313 | orchestrator | Sunday 05 April 2026 00:51:55 +0000 (0:00:01.177) 0:00:02.834 ********** 2026-04-05 00:52:35.932326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 00:52:35.932341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 00:52:35.932352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 00:52:35.932362 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 00:52:35.932386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 00:52:35.932428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 00:52:35.932439 | orchestrator | 2026-04-05 00:52:35.932449 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-04-05 00:52:35.932459 | orchestrator | Sunday 05 April 2026 00:51:57 +0000 (0:00:02.718) 0:00:05.553 ********** 2026-04-05 00:52:35.932475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 00:52:35.932486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 00:52:35.932496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 00:52:35.932506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 00:52:35.932516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 00:52:35.932541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 00:52:35.932551 | orchestrator | 2026-04-05 00:52:35.932561 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-04-05 00:52:35.932571 | orchestrator | Sunday 05 April 2026 00:52:02 +0000 (0:00:04.683) 0:00:10.236 ********** 2026-04-05 00:52:35.932587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 00:52:35.932597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 00:52:35.932607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 00:52:35.932617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 00:52:35.932627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 00:52:35.932651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 
'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 00:52:35.932664 | orchestrator | 2026-04-05 00:52:35.932675 | orchestrator | TASK [service-check-containers : redis | Check containers] ********************* 2026-04-05 00:52:35.932686 | orchestrator | Sunday 05 April 2026 00:52:07 +0000 (0:00:04.557) 0:00:14.794 ********** 2026-04-05 00:52:35.932698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 00:52:35.932714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 00:52:35.932726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 00:52:35.932738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 00:52:35.932749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 00:52:35.932782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 00:52:35.932800 | orchestrator | 2026-04-05 00:52:35.932817 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] *** 2026-04-05 00:52:35.932834 | orchestrator | Sunday 05 April 2026 00:52:10 +0000 (0:00:03.417) 0:00:18.211 ********** 2026-04-05 00:52:35.932851 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 00:52:35.932868 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:52:35.932885 | orchestrator | } 2026-04-05 00:52:35.932902 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 00:52:35.932920 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:52:35.932938 | orchestrator | } 2026-04-05 00:52:35.932956 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 00:52:35.932974 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:52:35.932992 | orchestrator | } 2026-04-05 00:52:35.933012 | orchestrator | 2026-04-05 00:52:35.933030 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 00:52:35.933043 | orchestrator | Sunday 05 April 2026 00:52:11 +0000 (0:00:01.151) 0:00:19.363 ********** 2026-04-05 
00:52:35.933060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-04-05 00:52:35.933071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-04-05 00:52:35.933082 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:52:35.933092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-04-05 00:52:35.933110 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-04-05 00:52:35.933121 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:52:35.933131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-04-05 00:52:35.933149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-05 00:52:35.933160 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:52:35.933169 | orchestrator |
2026-04-05 00:52:35.933179 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-05 00:52:35.933188 | orchestrator | Sunday 05 April 2026 00:52:13 +0000 (0:00:01.478) 0:00:20.841 **********
2026-04-05 00:52:35.933198 | orchestrator |
2026-04-05 00:52:35.933207 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-05 00:52:35.933216 | orchestrator | Sunday 05 April 2026 00:52:13 +0000 (0:00:00.164) 0:00:21.006 **********
2026-04-05 00:52:35.933226 | orchestrator |
2026-04-05 00:52:35.933235 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-05 00:52:35.933244 | orchestrator | Sunday 05 April 2026 00:52:13 +0000 (0:00:00.269) 0:00:21.276 **********
2026-04-05 00:52:35.933254 | orchestrator |
2026-04-05 00:52:35.933263 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-04-05 00:52:35.933278 | orchestrator | Sunday 05 April 2026 00:52:13 +0000 (0:00:00.378) 0:00:21.654 **********
2026-04-05 00:52:35.933287 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:52:35.933301 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:52:35.933317 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:52:35.933333 | orchestrator |
2026-04-05 00:52:35.933348 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-04-05 00:52:35.933365 | orchestrator | Sunday 05 April 2026 00:52:24 +0000 (0:00:10.699) 0:00:32.353 **********
2026-04-05 00:52:35.933379 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:52:35.933394 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:52:35.933435 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:52:35.933451 | orchestrator |
2026-04-05 00:52:35.933467 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:52:35.933484 | orchestrator | testbed-node-0 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-05 00:52:35.933512 | orchestrator | testbed-node-1 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-05 00:52:35.933527 | orchestrator | testbed-node-2 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-05 00:52:35.933541 | orchestrator |
2026-04-05 00:52:35.933556 | orchestrator |
2026-04-05 00:52:35.933572 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:52:35.933588 | orchestrator | Sunday 05 April 2026 00:52:35 +0000 (0:00:10.635) 0:00:42.989 **********
2026-04-05 00:52:35.933603 | orchestrator | ===============================================================================
2026-04-05 00:52:35.933617 | orchestrator | redis : Restart redis container ---------------------------------------- 10.70s
2026-04-05 00:52:35.933632 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.64s
2026-04-05 00:52:35.933646 | orchestrator | redis : Copying over default config.json files -------------------------- 4.68s
2026-04-05 00:52:35.933661 | orchestrator | redis : Copying over redis config files --------------------------------- 4.56s
2026-04-05 00:52:35.933676 | orchestrator | service-check-containers : redis | Check containers --------------------- 3.42s
2026-04-05 00:52:35.933692 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.72s
2026-04-05 00:52:35.933707 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.48s
2026-04-05 00:52:35.933722 | orchestrator | redis : include_tasks ---------------------------------------------------
1.18s
2026-04-05 00:52:35.933737 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 1.15s
2026-04-05 00:52:35.933753 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.82s
2026-04-05 00:52:35.933770 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.53s
2026-04-05 00:52:35.933786 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.47s
2026-04-05 00:52:35.934067 | orchestrator | 2026-04-05 00:52:35 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED
2026-04-05 00:52:35.937885 | orchestrator | 2026-04-05 00:52:35 | INFO  | Task bc9d1f0d-ce1f-4205-a18d-458bfe282f62 is in state STARTED
2026-04-05 00:52:35.940204 | orchestrator | 2026-04-05 00:52:35 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED
2026-04-05 00:52:35.943184 | orchestrator | 2026-04-05 00:52:35 | INFO  | Task 3a21a578-ab24-4c01-9524-7a77190c4f11 is in state STARTED
2026-04-05 00:52:35.945612 | orchestrator | 2026-04-05 00:52:35 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED
2026-04-05 00:52:35.945672 | orchestrator | 2026-04-05 00:52:35 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:53:15.748015 | orchestrator | 2026-04-05 00:53:15 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED
2026-04-05 00:53:15.748629 | orchestrator | 2026-04-05 00:53:15 | INFO  | Task db5e6290-b6df-483a-b288-4bbfed62d4a9 is in state STARTED
2026-04-05 00:53:15.752536 | orchestrator |
2026-04-05 00:53:15.752600 | orchestrator |
2026-04-05 00:53:15.752624 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 00:53:15.752641 | orchestrator |
2026-04-05 00:53:15.752652 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 00:53:15.752664 | orchestrator | Sunday 05 April 2026 00:51:53 +0000 (0:00:00.550) 0:00:00.550 **********
2026-04-05 00:53:15.752676 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:53:15.752688 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:53:15.752699 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:53:15.752710 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:53:15.752721 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:53:15.752732 | orchestrator | ok: [testbed-node-5]
2026-04-05
00:53:15.752743 | orchestrator |
2026-04-05 00:53:15.752754 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 00:53:15.752765 | orchestrator | Sunday 05 April 2026 00:51:54 +0000 (0:00:00.866) 0:00:01.417 **********
2026-04-05 00:53:15.752776 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-05 00:53:15.752791 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-05 00:53:15.752831 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-05 00:53:15.752851 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-05 00:53:15.752869 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-05 00:53:15.752886 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-05 00:53:15.752903 | orchestrator |
2026-04-05 00:53:15.752920 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-04-05 00:53:15.752967 | orchestrator |
2026-04-05 00:53:15.752986 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-04-05 00:53:15.753004 | orchestrator | Sunday 05 April 2026 00:51:56 +0000 (0:00:02.099) 0:00:03.517 **********
2026-04-05 00:53:15.753026 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:53:15.753044 | orchestrator |
2026-04-05 00:53:15.753063 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-05 00:53:15.753083 | orchestrator | Sunday 05 April 2026 00:51:58 +0000 (0:00:02.540) 0:00:06.057 **********
2026-04-05 00:53:15.753132 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-04-05 00:53:15.753151 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-04-05 00:53:15.753171 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-04-05 00:53:15.753188 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-04-05 00:53:15.753205 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-04-05 00:53:15.753222 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-04-05 00:53:15.753239 | orchestrator |
2026-04-05 00:53:15.753258 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-05 00:53:15.753276 | orchestrator | Sunday 05 April 2026 00:52:01 +0000 (0:00:03.281) 0:00:08.564 **********
2026-04-05 00:53:15.753296 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-04-05 00:53:15.753316 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-04-05 00:53:15.753336 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-04-05 00:53:15.753355 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-04-05 00:53:15.753373 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-04-05 00:53:15.753392 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-04-05 00:53:15.753441 | orchestrator |
2026-04-05 00:53:15.753460 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-05 00:53:15.753477 | orchestrator | Sunday 05 April 2026 00:52:04 +0000 (0:00:03.013) 0:00:11.846 **********
2026-04-05 00:53:15.753495 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-04-05 00:53:15.753515 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:53:15.753535 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-04-05 00:53:15.753553 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:53:15.753571 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-04-05 00:53:15.753591 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:53:15.753610 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-04-05 00:53:15.753631 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:53:15.753650 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-04-05 00:53:15.753671 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:53:15.753691 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2026-04-05 00:53:15.753711 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:53:15.753730 | orchestrator |
2026-04-05 00:53:15.753750 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2026-04-05 00:53:15.753769 | orchestrator | Sunday 05 April 2026 00:52:07 +0000 (0:00:01.974) 0:00:14.859 **********
2026-04-05 00:53:15.753788 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:53:15.753808 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:53:15.753828 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:53:15.753848 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:53:15.753868 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:53:15.753888 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:53:15.753908 | orchestrator |
2026-04-05 00:53:15.753928 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2026-04-05 00:53:15.753947 | orchestrator | Sunday 05 April 2026 00:52:09 +0000 (0:00:01.974) 0:00:16.834 **********
2026-04-05 00:53:15.753997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:53:15.754165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:53:15.754197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:53:15.754223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:53:15.754245 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:53:15.754267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:53:15.754305 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:53:15.754350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:53:15.754374 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:53:15.754395 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:53:15.754458 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:53:15.754489 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:53:15.754506 | orchestrator | 2026-04-05 00:53:15.754525 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-04-05 00:53:15.754557 | orchestrator | Sunday 05 April 2026 00:52:12 +0000 (0:00:02.788) 0:00:19.623 ********** 2026-04-05 00:53:15.754583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:53:15.754604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:53:15.754625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:53:15.754645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:53:15.754666 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:53:15.754720 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:53:15.754774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:53:15.754797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:53:15.754815 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:53:15.754833 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:53:15.754852 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:53:15.754893 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:53:15.754912 | orchestrator | 2026-04-05 00:53:15.754931 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-04-05 00:53:15.754950 | orchestrator | Sunday 05 April 2026 00:52:18 +0000 (0:00:05.588) 0:00:25.212 ********** 
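The group names seen earlier in this run (e.g. `enable_openvswitch_True_enable_ovs_dpdk_False` from the "Group hosts based on enabled services" task) are `group_by`-style keys derived from boolean service flags. A minimal Python sketch of that key construction (the helper name and flag ordering are our assumptions, not kolla-ansible's actual code):

```python
# Sketch only: build an Ansible group_by-style key from ordered service flags,
# matching the shape "enable_openvswitch_True_enable_ovs_dpdk_False" in the log.
def service_group_key(flags: dict) -> str:
    """Join each flag name with its boolean value, in insertion order."""
    return "_".join(f"{name}_{value}" for name, value in flags.items())

key = service_group_key({"enable_openvswitch": True, "enable_ovs_dpdk": False})
print(key)  # enable_openvswitch_True_enable_ovs_dpdk_False
```

Hosts sharing the same flag combination land in the same dynamic group, which is why all six testbed nodes report the identical item above.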
2026-04-05 00:53:15.754969 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:53:15.754987 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:53:15.755005 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:53:15.755023 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:53:15.755041 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:53:15.755060 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:53:15.755079 | orchestrator | 2026-04-05 00:53:15.755106 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] *************** 2026-04-05 00:53:15.755125 | orchestrator | Sunday 05 April 2026 00:52:19 +0000 (0:00:01.204) 0:00:26.416 ********** 2026-04-05 00:53:15.755145 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:53:15.755167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:53:15.755185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:53:15.755204 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:53:15.755252 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:53:15.755283 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:53:15.755305 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:53:15.755325 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:53:15.755345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:53:15.755378 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:53:15.755528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:53:15.755564 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:53:15.755584 | orchestrator | 2026-04-05 00:53:15.755602 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] *** 2026-04-05 00:53:15.755621 | orchestrator | Sunday 05 April 2026 00:52:23 +0000 (0:00:04.126) 0:00:30.542 ********** 2026-04-05 00:53:15.755639 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 00:53:15.755656 | orchestrator | 
 "msg": "Notifying handlers" 2026-04-05 00:53:15.755675 | orchestrator | } 2026-04-05 00:53:15.755695 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 00:53:15.755714 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:53:15.755733 | orchestrator | } 2026-04-05 00:53:15.755751 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 00:53:15.755771 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:53:15.755790 | orchestrator | } 2026-04-05 00:53:15.755810 | orchestrator | changed: [testbed-node-3] => { 2026-04-05 00:53:15.755828 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:53:15.755847 | orchestrator | } 2026-04-05 00:53:15.755866 | orchestrator | changed: [testbed-node-4] => { 2026-04-05 00:53:15.755880 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:53:15.755891 | orchestrator | } 2026-04-05 00:53:15.755902 | orchestrator | changed: [testbed-node-5] => { 2026-04-05 00:53:15.755913 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:53:15.755924 | orchestrator | } 2026-04-05 00:53:15.755935 | orchestrator | 2026-04-05 00:53:15.755946 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 00:53:15.755957 | orchestrator | Sunday 05 April 2026 00:52:24 +0000 (0:00:00.948) 0:00:31.491 ********** 2026-04-05 00:53:15.755968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-05 00:53:15.755995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-05 00:53:15.756007 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:53:15.756030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-05 00:53:15.756043 | orchestrator | 2026-04-05 00:53:15 | INFO  | Task bc9d1f0d-ce1f-4205-a18d-458bfe282f62 is in state SUCCESS 2026-04-05 00:53:15.756062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-05 00:53:15.756072 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:53:15.756082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-05 00:53:15.756093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-05 00:53:15.756110 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:53:15.756120 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-05 00:53:15.756130 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-05 00:53:15.756140 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:53:15.756158 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-05 00:53:15.756173 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-05 00:53:15.756183 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:53:15.756193 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client 
list-dbs'], 'timeout': '30'}}})  2026-04-05 00:53:15.756210 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-05 00:53:15.756221 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:53:15.756231 | orchestrator | 2026-04-05 00:53:15.756241 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-05 00:53:15.756251 | orchestrator | Sunday 05 April 2026 00:52:27 +0000 (0:00:03.287) 0:00:34.779 ********** 2026-04-05 00:53:15.756261 | orchestrator | 2026-04-05 00:53:15.756271 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-05 00:53:15.756281 | orchestrator | Sunday 05 April 2026 00:52:28 +0000 (0:00:01.246) 0:00:36.026 ********** 2026-04-05 00:53:15.756290 | orchestrator | 2026-04-05 00:53:15.756300 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-05 00:53:15.756310 | orchestrator | Sunday 05 April 2026 00:52:29 +0000 (0:00:00.302) 0:00:36.329 ********** 2026-04-05 00:53:15.756319 | orchestrator | 2026-04-05 00:53:15.756329 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-05 00:53:15.756338 | orchestrator | Sunday 05 April 2026 00:52:29 +0000 (0:00:00.227) 0:00:36.556 ********** 2026-04-05 
00:53:15.756348 | orchestrator | 2026-04-05 00:53:15.756357 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-05 00:53:15.756368 | orchestrator | Sunday 05 April 2026 00:52:29 +0000 (0:00:00.182) 0:00:36.739 ********** 2026-04-05 00:53:15.756377 | orchestrator | 2026-04-05 00:53:15.756386 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-05 00:53:15.756396 | orchestrator | Sunday 05 April 2026 00:52:29 +0000 (0:00:00.144) 0:00:36.883 ********** 2026-04-05 00:53:15.756437 | orchestrator | 2026-04-05 00:53:15.756447 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-04-05 00:53:15.756457 | orchestrator | Sunday 05 April 2026 00:52:29 +0000 (0:00:00.165) 0:00:37.049 ********** 2026-04-05 00:53:15.756466 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:53:15.756476 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:53:15.756486 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:53:15.756495 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:53:15.756505 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:53:15.756514 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:53:15.756524 | orchestrator | 2026-04-05 00:53:15.756540 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-04-05 00:53:15.756550 | orchestrator | Sunday 05 April 2026 00:52:36 +0000 (0:00:06.457) 0:00:43.507 ********** 2026-04-05 00:53:15.756560 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:53:15.756570 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:53:15.756580 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:53:15.756590 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:53:15.756599 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:53:15.756609 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:53:15.756618 | orchestrator | 2026-04-05 
00:53:15.756628 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-04-05 00:53:15.756637 | orchestrator | Sunday 05 April 2026 00:52:38 +0000 (0:00:01.994) 0:00:45.502 ********** 2026-04-05 00:53:15.756647 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:53:15.756657 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:53:15.756666 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:53:15.756684 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:53:15.756693 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:53:15.756703 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:53:15.756712 | orchestrator | 2026-04-05 00:53:15.756727 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-04-05 00:53:15.756737 | orchestrator | Sunday 05 April 2026 00:52:47 +0000 (0:00:09.695) 0:00:55.197 ********** 2026-04-05 00:53:15.756746 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-04-05 00:53:15.756756 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-04-05 00:53:15.756766 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-04-05 00:53:15.756776 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-04-05 00:53:15.756785 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-04-05 00:53:15.756796 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-04-05 00:53:15.756805 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 
2026-04-05 00:53:15.756814 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-04-05 00:53:15.756824 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-04-05 00:53:15.756834 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-04-05 00:53:15.756843 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-04-05 00:53:15.756853 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-04-05 00:53:15.756862 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-05 00:53:15.756872 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-05 00:53:15.756881 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-05 00:53:15.756891 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-05 00:53:15.756900 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-05 00:53:15.756910 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-05 00:53:15.756919 | orchestrator | 2026-04-05 00:53:15.756929 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-04-05 00:53:15.756938 | orchestrator | Sunday 05 April 2026 00:52:56 +0000 (0:00:08.252) 0:01:03.449 ********** 2026-04-05 00:53:15.756948 | 
orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-04-05 00:53:15.756958 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:53:15.756967 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-04-05 00:53:15.756977 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:53:15.756987 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-04-05 00:53:15.756997 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:53:15.757006 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-04-05 00:53:15.757016 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-04-05 00:53:15.757026 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-04-05 00:53:15.757042 | orchestrator | 2026-04-05 00:53:15.757051 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-04-05 00:53:15.757061 | orchestrator | Sunday 05 April 2026 00:52:58 +0000 (0:00:02.411) 0:01:05.860 ********** 2026-04-05 00:53:15.757070 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-04-05 00:53:15.757080 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:53:15.757089 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-04-05 00:53:15.757099 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:53:15.757116 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-04-05 00:53:15.757126 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:53:15.757135 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-04-05 00:53:15.757145 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-04-05 00:53:15.757155 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-04-05 00:53:15.757164 | orchestrator | 2026-04-05 00:53:15.757174 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-04-05 
00:53:15.757183 | orchestrator | Sunday 05 April 2026 00:53:02 +0000 (0:00:04.311) 0:01:10.172 ********** 2026-04-05 00:53:15.757193 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:53:15.757203 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:53:15.757212 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:53:15.757222 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:53:15.757231 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:53:15.757241 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:53:15.757250 | orchestrator | 2026-04-05 00:53:15.757260 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:53:15.757278 | orchestrator | testbed-node-0 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-05 00:53:15.757289 | orchestrator | testbed-node-1 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-05 00:53:15.757299 | orchestrator | testbed-node-2 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-05 00:53:15.757308 | orchestrator | testbed-node-3 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 00:53:15.757318 | orchestrator | testbed-node-4 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 00:53:15.757327 | orchestrator | testbed-node-5 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 00:53:15.757337 | orchestrator | 2026-04-05 00:53:15.757347 | orchestrator | 2026-04-05 00:53:15.757357 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:53:15.757367 | orchestrator | Sunday 05 April 2026 00:53:12 +0000 (0:00:09.547) 0:01:19.720 ********** 2026-04-05 00:53:15.757376 | orchestrator | =============================================================================== 2026-04-05 00:53:15.757386 | orchestrator | 
openvswitch : Restart openvswitch-vswitchd container ------------------- 19.24s 2026-04-05 00:53:15.757395 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.25s 2026-04-05 00:53:15.757437 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 6.46s 2026-04-05 00:53:15.757453 | orchestrator | openvswitch : Copying over config.json files for services --------------- 5.59s 2026-04-05 00:53:15.757463 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.31s 2026-04-05 00:53:15.757472 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 4.13s 2026-04-05 00:53:15.757482 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.29s 2026-04-05 00:53:15.757499 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 3.28s 2026-04-05 00:53:15.757508 | orchestrator | module-load : Drop module persistence ----------------------------------- 3.01s 2026-04-05 00:53:15.757518 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.79s 2026-04-05 00:53:15.757527 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.54s 2026-04-05 00:53:15.757537 | orchestrator | module-load : Load modules ---------------------------------------------- 2.51s 2026-04-05 00:53:15.757546 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.41s 2026-04-05 00:53:15.757556 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 2.27s 2026-04-05 00:53:15.757565 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.10s 2026-04-05 00:53:15.757575 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.99s 2026-04-05 00:53:15.757585 | orchestrator | openvswitch : 
Create /run/openvswitch directory on host ----------------- 1.97s 2026-04-05 00:53:15.757594 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.20s 2026-04-05 00:53:15.757604 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 0.95s 2026-04-05 00:53:15.757613 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.86s 2026-04-05 00:53:15.757623 | orchestrator | 2026-04-05 00:53:15 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:53:15.757633 | orchestrator | 2026-04-05 00:53:15 | INFO  | Task 3a21a578-ab24-4c01-9524-7a77190c4f11 is in state STARTED 2026-04-05 00:53:15.757643 | orchestrator | 2026-04-05 00:53:15 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:53:15.757653 | orchestrator | 2026-04-05 00:53:15 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:53:18.785946 | orchestrator | 2026-04-05 00:53:18 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED 2026-04-05 00:53:18.787292 | orchestrator | 2026-04-05 00:53:18 | INFO  | Task db5e6290-b6df-483a-b288-4bbfed62d4a9 is in state STARTED 2026-04-05 00:53:18.789440 | orchestrator | 2026-04-05 00:53:18 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:53:18.793703 | orchestrator | 2026-04-05 00:53:18 | INFO  | Task 3a21a578-ab24-4c01-9524-7a77190c4f11 is in state STARTED 2026-04-05 00:53:18.793759 | orchestrator | 2026-04-05 00:53:18 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:53:18.793768 | orchestrator | 2026-04-05 00:53:18 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:53:21.832629 | orchestrator | 2026-04-05 00:53:21 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED 2026-04-05 00:53:21.833385 | orchestrator | 2026-04-05 00:53:21 | INFO  | Task 
db5e6290-b6df-483a-b288-4bbfed62d4a9 is in state STARTED 2026-04-05 00:53:21.835142 | orchestrator | 2026-04-05 00:53:21 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:53:21.836263 | orchestrator | 2026-04-05 00:53:21 | INFO  | Task 3a21a578-ab24-4c01-9524-7a77190c4f11 is in state STARTED 2026-04-05 00:53:21.838958 | orchestrator | 2026-04-05 00:53:21 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:53:21.839216 | orchestrator | 2026-04-05 00:53:21 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:53:58.891499 | orchestrator | 2026-04-05 00:53:58 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED 2026-04-05 00:53:58.891585 | orchestrator | 2026-04-05 00:53:58 | INFO  | Task db5e6290-b6df-483a-b288-4bbfed62d4a9 is in state STARTED 2026-04-05 00:53:58.891600 | orchestrator | 2026-04-05 00:53:58 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:53:58.891613 | orchestrator | 2026-04-05 00:53:58 | INFO  | Task 3a21a578-ab24-4c01-9524-7a77190c4f11 is in state STARTED 2026-04-05 00:53:58.891625 | orchestrator | 2026-04-05 00:53:58 | INFO  | Task 
213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:53:58.891637 | orchestrator | 2026-04-05 00:53:58 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:54:02.070509 | orchestrator | 2026-04-05 00:54:02 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state STARTED 2026-04-05 00:54:02.070590 | orchestrator | 2026-04-05 00:54:02 | INFO  | Task db5e6290-b6df-483a-b288-4bbfed62d4a9 is in state STARTED 2026-04-05 00:54:02.070607 | orchestrator | 2026-04-05 00:54:02 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:54:02.070620 | orchestrator | 2026-04-05 00:54:02 | INFO  | Task 3a21a578-ab24-4c01-9524-7a77190c4f11 is in state STARTED 2026-04-05 00:54:02.070633 | orchestrator | 2026-04-05 00:54:02 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:54:02.070645 | orchestrator | 2026-04-05 00:54:02 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:54:05.098457 | orchestrator | 2026-04-05 00:54:05 | INFO  | Task dd1510c4-c2d0-41b1-a669-d61df898e243 is in state SUCCESS 2026-04-05 00:54:05.098883 | orchestrator | 2026-04-05 00:54:05.098985 | orchestrator | 2026-04-05 00:54:05.099017 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-04-05 00:54:05.099031 | orchestrator | 2026-04-05 00:54:05.099042 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-04-05 00:54:05.099054 | orchestrator | Sunday 05 April 2026 00:48:46 +0000 (0:00:00.352) 0:00:00.352 ********** 2026-04-05 00:54:05.099065 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:54:05.099078 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:54:05.099088 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:54:05.099099 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:54:05.099110 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:54:05.099120 | orchestrator | ok: [testbed-node-2] 
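The polling above follows a simple pattern: check each task's state, print it, and wait one second until every task has left STARTED. A minimal sketch of that loop, where `check_state` is a hypothetical stand-in for the real status query against the OSISM API (here it simply reports SUCCESS on the third check):

```shell
checks=0
check_state() {
    # Stand-in for the real API query: pretend the task finishes on check 3.
    if [ "$checks" -ge 3 ]; then echo SUCCESS; else echo STARTED; fi
}

state=STARTED
while [ "$state" = "STARTED" ]; do
    checks=$((checks + 1))
    state=$(check_state)
    echo "Task dd1510c4 is in state $state"
    # Wait 1 second(s) until the next check, as in the log above.
    if [ "$state" = "STARTED" ]; then sleep 1; fi
done
```

The real client polls several task IDs per cycle; this sketch tracks a single one to show the shape of the loop.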
2026-04-05 00:54:05.099131 | orchestrator |
2026-04-05 00:54:05.099142 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-04-05 00:54:05.099153 | orchestrator | Sunday 05 April 2026 00:48:47 +0000 (0:00:00.741) 0:00:01.093 **********
2026-04-05 00:54:05.099164 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:54:05.099176 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:54:05.099211 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:54:05.099223 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:54:05.099234 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:54:05.099245 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:54:05.099256 | orchestrator |
2026-04-05 00:54:05.099267 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-04-05 00:54:05.099278 | orchestrator | Sunday 05 April 2026 00:48:48 +0000 (0:00:01.085) 0:00:02.179 **********
2026-04-05 00:54:05.099289 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:54:05.099300 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:54:05.099311 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:54:05.099322 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:54:05.099332 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:54:05.099344 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:54:05.099387 | orchestrator |
2026-04-05 00:54:05.099404 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-04-05 00:54:05.099415 | orchestrator | Sunday 05 April 2026 00:48:49 +0000 (0:00:00.672) 0:00:02.852 **********
2026-04-05 00:54:05.099426 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:54:05.099437 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:54:05.099448 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:54:05.099459 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:54:05.099470 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:54:05.099481 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:54:05.099492 | orchestrator |
2026-04-05 00:54:05.099503 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-04-05 00:54:05.099514 | orchestrator | Sunday 05 April 2026 00:48:52 +0000 (0:00:03.234) 0:00:06.086 **********
2026-04-05 00:54:05.099525 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:54:05.099536 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:54:05.099547 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:54:05.099557 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:54:05.099569 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:54:05.099580 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:54:05.099591 | orchestrator |
2026-04-05 00:54:05.099602 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-04-05 00:54:05.099614 | orchestrator | Sunday 05 April 2026 00:48:54 +0000 (0:00:01.641) 0:00:07.727 **********
2026-04-05 00:54:05.099625 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:54:05.099636 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:54:05.099647 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:54:05.099658 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:54:05.099669 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:54:05.099681 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:54:05.099692 | orchestrator |
2026-04-05 00:54:05.099703 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-04-05 00:54:05.099714 | orchestrator | Sunday 05 April 2026 00:48:55 +0000 (0:00:01.373) 0:00:09.101 **********
2026-04-05 00:54:05.099726 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:54:05.099737 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:54:05.099749 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:54:05.099760 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:54:05.099771 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:54:05.099782 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:54:05.099793 | orchestrator |
2026-04-05 00:54:05.099804 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-04-05 00:54:05.099815 | orchestrator | Sunday 05 April 2026 00:48:57 +0000 (0:00:01.702) 0:00:10.803 **********
2026-04-05 00:54:05.099827 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:54:05.099838 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:54:05.099849 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:54:05.099860 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:54:05.099871 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:54:05.099892 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:54:05.099904 | orchestrator |
2026-04-05 00:54:05.099915 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-04-05 00:54:05.099927 | orchestrator | Sunday 05 April 2026 00:48:58 +0000 (0:00:00.923) 0:00:11.727 **********
2026-04-05 00:54:05.099938 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-05 00:54:05.099949 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-05 00:54:05.099960 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:54:05.099972 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-05 00:54:05.099983 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-05 00:54:05.099994 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:54:05.100005 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-05 00:54:05.100017 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-05 00:54:05.100029 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:54:05.100040 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-05 00:54:05.100069 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-05 00:54:05.100089 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:54:05.100101 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-05 00:54:05.100112 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-05 00:54:05.100125 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:54:05.100136 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-05 00:54:05.100147 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-05 00:54:05.100158 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:54:05.100169 | orchestrator |
2026-04-05 00:54:05.100180 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-04-05 00:54:05.100192 | orchestrator | Sunday 05 April 2026 00:48:59 +0000 (0:00:01.633) 0:00:13.361 **********
2026-04-05 00:54:05.100203 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:54:05.100214 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:54:05.100225 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:54:05.100236 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:54:05.100247 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:54:05.100259 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:54:05.100270 | orchestrator |
2026-04-05 00:54:05.100282 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-04-05 00:54:05.100294 | orchestrator | Sunday 05 April 2026 00:49:01 +0000 (0:00:02.169) 0:00:15.530 **********
2026-04-05 00:54:05.100306 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:54:05.100317 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:54:05.100328 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:54:05.100339 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:54:05.100351 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:54:05.100388 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:54:05.100399 | orchestrator |
2026-04-05 00:54:05.100410 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-04-05 00:54:05.100422 | orchestrator | Sunday 05 April 2026 00:49:03 +0000 (0:00:01.251) 0:00:16.782 **********
2026-04-05 00:54:05.100433 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:54:05.100444 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:54:05.100455 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:54:05.100466 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:54:05.100477 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:54:05.100488 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:54:05.100499 | orchestrator |
2026-04-05 00:54:05.100511 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-04-05 00:54:05.100532 | orchestrator | Sunday 05 April 2026 00:49:09 +0000 (0:00:05.803) 0:00:22.586 **********
2026-04-05 00:54:05.100543 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:54:05.100555 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:54:05.100567 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:54:05.100578 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:54:05.100590 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:54:05.100601 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:54:05.100612 | orchestrator |
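The `k3s_prereq` tasks above flip kernel networking switches (IPv4/IPv6 forwarding, router advertisements). A sketch of an equivalent sysctl drop-in is below; the key names are assumptions based on the task titles, not taken from the role source, and the file is written under `/tmp` so the sketch runs unprivileged (a real host would target `/etc/sysctl.d/` and then run `sysctl --system`):

```shell
# Hypothetical sysctl drop-in mirroring the forwarding tasks in the play above.
conf=/tmp/99-k3s-forwarding.conf
cat > "$conf" <<'EOF'
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.all.accept_ra = 2
EOF
wc -l < "$conf"
```

`accept_ra = 2` (accept router advertisements even with forwarding enabled) is the usual companion to the "Enable IPv6 router advertisements" task, but that mapping is an assumption here.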
2026-04-05 00:54:05.100623 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-04-05 00:54:05.100634 | orchestrator | Sunday 05 April 2026 00:49:10 +0000 (0:00:01.501) 0:00:24.087 **********
2026-04-05 00:54:05.100645 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:54:05.100656 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:54:05.100667 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:54:05.100679 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:54:05.100691 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:54:05.100702 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:54:05.100714 | orchestrator |
2026-04-05 00:54:05.100726 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-04-05 00:54:05.100740 | orchestrator | Sunday 05 April 2026 00:49:14 +0000 (0:00:03.555) 0:00:27.643 **********
2026-04-05 00:54:05.100751 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:54:05.100762 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:54:05.100774 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:54:05.100785 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:54:05.100796 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:54:05.100807 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:54:05.100819 | orchestrator |
2026-04-05 00:54:05.100830 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-04-05 00:54:05.100841 | orchestrator | Sunday 05 April 2026 00:49:16 +0000 (0:00:02.485) 0:00:30.128 **********
2026-04-05 00:54:05.100853 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-04-05 00:54:05.100866 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-04-05 00:54:05.100878 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:54:05.100889 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-04-05 00:54:05.100900 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-04-05 00:54:05.100911 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:54:05.100922 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-04-05 00:54:05.100933 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-04-05 00:54:05.100944 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:54:05.100955 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-04-05 00:54:05.100966 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-04-05 00:54:05.100977 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:54:05.100989 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-04-05 00:54:05.101000 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-04-05 00:54:05.101011 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:54:05.101022 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-04-05 00:54:05.101032 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-04-05 00:54:05.101044 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:54:05.101056 | orchestrator |
2026-04-05 00:54:05.101067 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-04-05 00:54:05.101087 | orchestrator | Sunday 05 April 2026 00:49:18 +0000 (0:00:01.980) 0:00:32.109 **********
2026-04-05 00:54:05.101106 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:54:05.101118 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:54:05.101129 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:54:05.101147 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:54:05.101159 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:54:05.101170 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:54:05.101181 | orchestrator |
2026-04-05 00:54:05.101192 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-04-05 00:54:05.101204 | orchestrator | Sunday 05 April 2026 00:49:20 +0000 (0:00:01.802) 0:00:33.912 **********
2026-04-05 00:54:05.101215 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:54:05.101227 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:54:05.101238 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:54:05.101250 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:54:05.101261 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:54:05.101272 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:54:05.101283 | orchestrator |
2026-04-05 00:54:05.101294 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-04-05 00:54:05.101305 | orchestrator |
2026-04-05 00:54:05.101317 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-04-05 00:54:05.101328 | orchestrator | Sunday 05 April 2026 00:49:22 +0000 (0:00:02.191) 0:00:36.103 **********
2026-04-05 00:54:05.101339 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:54:05.101350 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:54:05.101381 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:54:05.101394 | orchestrator |
2026-04-05 00:54:05.101406 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-04-05 00:54:05.101418 | orchestrator | Sunday 05 April 2026 00:49:24 +0000 (0:00:01.586) 0:00:37.689 **********
2026-04-05 00:54:05.101429 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:54:05.101440 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:54:05.101451 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:54:05.101462 | orchestrator |
2026-04-05 00:54:05.101473 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-04-05 00:54:05.101484 | orchestrator | Sunday 05 April 2026 00:49:26 +0000 (0:00:02.035) 0:00:39.724 **********
2026-04-05 00:54:05.101495 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:54:05.101506 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:54:05.101516 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:54:05.101527 | orchestrator |
2026-04-05 00:54:05.101538 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-04-05 00:54:05.101549 | orchestrator | Sunday 05 April 2026 00:49:28 +0000 (0:00:02.113) 0:00:41.838 **********
2026-04-05 00:54:05.101560 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:54:05.101571 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:54:05.101582 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:54:05.101592 | orchestrator |
2026-04-05 00:54:05.101603 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-04-05 00:54:05.101615 | orchestrator | Sunday 05 April 2026 00:49:29 +0000 (0:00:01.005) 0:00:42.843 **********
2026-04-05 00:54:05.101627 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:54:05.101638 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:54:05.101650 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:54:05.101661 | orchestrator |
2026-04-05 00:54:05.101672 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-04-05 00:54:05.101683 | orchestrator | Sunday 05 April 2026 00:49:29 +0000 (0:00:00.398) 0:00:43.242 **********
2026-04-05 00:54:05.101694 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:54:05.101705 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:54:05.101715 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:54:05.101726 | orchestrator |
2026-04-05 00:54:05.101737 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-04-05 00:54:05.101748 | orchestrator | Sunday 05 April 2026 00:49:30 +0000 (0:00:01.047) 0:00:44.289 **********
2026-04-05 00:54:05.101759 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:54:05.101770 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:54:05.101781 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:54:05.101800 | orchestrator |
2026-04-05 00:54:05.101811 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-04-05 00:54:05.101822 | orchestrator | Sunday 05 April 2026 00:49:32 +0000 (0:00:01.892) 0:00:46.182 **********
2026-04-05 00:54:05.101833 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:54:05.101844 | orchestrator |
2026-04-05 00:54:05.101855 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-04-05 00:54:05.101867 | orchestrator | Sunday 05 April 2026 00:49:34 +0000 (0:00:01.801) 0:00:47.983 **********
2026-04-05 00:54:05.101878 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:54:05.101890 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:54:05.101900 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:54:05.101911 | orchestrator |
2026-04-05 00:54:05.101923 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-04-05 00:54:05.101933 | orchestrator | Sunday 05 April 2026 00:49:38 +0000 (0:00:03.858) 0:00:51.842 **********
2026-04-05 00:54:05.101944 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:54:05.101955 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:54:05.101966 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:54:05.101977 | orchestrator |
2026-04-05 00:54:05.101988 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-04-05 00:54:05.101999 | orchestrator | Sunday 05 April 2026 00:49:39 +0000 (0:00:01.072) 0:00:52.914 **********
2026-04-05 00:54:05.102051 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:54:05.102066 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:54:05.102081 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:54:05.102091 | orchestrator |
2026-04-05 00:54:05.102103 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-04-05 00:54:05.102114 | orchestrator | Sunday 05 April 2026 00:49:41 +0000 (0:00:01.784) 0:00:54.699 **********
2026-04-05 00:54:05.102126 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:54:05.102136 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:54:05.102148 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:54:05.102159 | orchestrator |
2026-04-05 00:54:05.102171 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-04-05 00:54:05.102200 | orchestrator | Sunday 05 April 2026 00:49:44 +0000 (0:00:02.893) 0:00:57.592 **********
2026-04-05 00:54:05.102221 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:54:05.102240 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:54:05.102258 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:54:05.102278 | orchestrator |
2026-04-05 00:54:05.102297 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-04-05 00:54:05.102317 | orchestrator | Sunday 05 April 2026 00:49:44 +0000 (0:00:00.978) 0:00:58.571 **********
2026-04-05 00:54:05.102337 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:54:05.102391 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:54:05.102416 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:54:05.102435 | orchestrator |
2026-04-05 00:54:05.102455 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-04-05 00:54:05.102475 | orchestrator | Sunday 05 April 2026 00:49:45 +0000 (0:00:00.834) 0:00:59.406 **********
2026-04-05 00:54:05.102495 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:54:05.102507 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:54:05.102518 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:54:05.102529 | orchestrator |
2026-04-05 00:54:05.102540 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-04-05 00:54:05.102552 | orchestrator | Sunday 05 April 2026 00:49:50 +0000 (0:00:04.381) 0:01:03.789 **********
2026-04-05 00:54:05.102563 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:54:05.102574 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:54:05.102585 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:54:05.102596 | orchestrator |
2026-04-05 00:54:05.102607 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-04-05 00:54:05.102630 | orchestrator | Sunday 05 April 2026 00:49:53 +0000 (0:00:03.134) 0:01:06.923 **********
2026-04-05 00:54:05.102641 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:54:05.102652 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:54:05.102662 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:54:05.102673 | orchestrator |
2026-04-05 00:54:05.102684 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-04-05 00:54:05.102696 | orchestrator | Sunday 05 April 2026 00:49:54 +0000 (0:00:01.352) 0:01:08.276 **********
2026-04-05 00:54:05.102707 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-04-05 00:54:05.102718 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-04-05 00:54:05.102729 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-04-05 00:54:05.102740 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-04-05 00:54:05.102751 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-04-05 00:54:05.102762 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-04-05 00:54:05.102773 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-04-05 00:54:05.102784 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-04-05 00:54:05.102795 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-04-05 00:54:05.102806 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-04-05 00:54:05.102817 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-04-05 00:54:05.102828 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-04-05 00:54:05.102838 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-04-05 00:54:05.102850 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-04-05 00:54:05.102861 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-04-05 00:54:05.102871 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:54:05.102882 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:54:05.102893 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:54:05.102904 | orchestrator |
2026-04-05 00:54:05.102916 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-04-05 00:54:05.102927 | orchestrator | Sunday 05 April 2026 00:50:48 +0000 (0:00:54.208) 0:02:02.485 **********
2026-04-05 00:54:05.102938 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:54:05.102949 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:54:05.102960 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:54:05.102970 | orchestrator |
2026-04-05 00:54:05.102981 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-04-05 00:54:05.103002 | orchestrator | Sunday 05 April 2026 00:50:49 +0000 (0:00:00.660) 0:02:03.146 **********
2026-04-05 00:54:05.103021 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:54:05.103039 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:54:05.103051 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:54:05.103061 | orchestrator |
2026-04-05 00:54:05.103072 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-04-05 00:54:05.103083 | orchestrator | Sunday 05 April 2026 00:50:50 +0000 (0:00:01.324) 0:02:04.470 **********
2026-04-05 00:54:05.103094 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:54:05.103105 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:54:05.103117 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:54:05.103128 | orchestrator |
2026-04-05 00:54:05.103139 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-04-05 00:54:05.103150 | orchestrator | Sunday 05 April 2026 00:50:52 +0000 (0:00:01.444) 0:02:05.915 **********
2026-04-05 00:54:05.103161 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:54:05.103172 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:54:05.103183 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:54:05.103193 | orchestrator |
2026-04-05 00:54:05.103204 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-04-05 00:54:05.103215 | orchestrator | Sunday 05 April 2026 00:51:18 +0000 (0:00:26.140) 0:02:32.055 **********
2026-04-05 00:54:05.103226 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:54:05.103237 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:54:05.103248 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:54:05.103259 | orchestrator |
2026-04-05 00:54:05.103270 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-04-05 00:54:05.103281 | orchestrator | Sunday 05 April 2026 00:51:19 +0000 (0:00:00.804) 0:02:32.860 **********
2026-04-05 00:54:05.103292 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:54:05.103303 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:54:05.103314 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:54:05.103325 | orchestrator |
2026-04-05 00:54:05.103336 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-04-05 00:54:05.103347 | orchestrator | Sunday 05 April 2026 00:51:20 +0000 (0:00:01.595) 0:02:34.456 **********
2026-04-05 00:54:05.103381 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:54:05.103393 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:54:05.103404 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:54:05.103415 | orchestrator |
2026-04-05 00:54:05.103426 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-04-05 00:54:05.103437 | orchestrator | Sunday 05 April 2026 00:51:21 +0000 (0:00:00.735) 0:02:35.192 **********
2026-04-05 00:54:05.103448 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:54:05.103459 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:54:05.103470 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:54:05.103481 | orchestrator |
2026-04-05 00:54:05.103491 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-04-05 00:54:05.103502 | orchestrator | Sunday 05 April 2026 00:51:22 +0000 (0:00:00.656) 0:02:35.848 **********
2026-04-05 00:54:05.103513 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:54:05.103524 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:54:05.103535 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:54:05.103546 | orchestrator |
2026-04-05 00:54:05.103558 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-04-05 00:54:05.103569 | orchestrator | Sunday 05 April 2026 00:51:22 +0000 (0:00:00.361) 0:02:36.209 **********
2026-04-05 00:54:05.103580 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:54:05.103592 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:54:05.103603 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:54:05.103613 | orchestrator |
2026-04-05 00:54:05.103625 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-04-05 00:54:05.103635 | orchestrator | Sunday 05 April 2026 00:51:23 +0000 (0:00:00.930) 0:02:37.140 **********
2026-04-05 00:54:05.103646 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:54:05.103658 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:54:05.103668 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:54:05.103689 | orchestrator |
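The node-token sequence above (register the file's mode, loosen it, read the token, restore the mode) is a common pattern for briefly reading a root-owned secret. A sketch against a scratch file; the real path is `/var/lib/rancher/k3s/server/node-token`, and the token value and mode numbers here are made up:

```shell
token_file=/tmp/node-token                 # real path: /var/lib/rancher/k3s/server/node-token
printf 'K10deadbeef::server:example\n' > "$token_file"   # fake token for the sketch
chmod 600 "$token_file"                    # k3s keeps the token tightly permissioned

orig_mode=$(stat -c '%a' "$token_file")    # "Register node-token file access mode"
chmod 644 "$token_file"                    # "Change file access node-token" (mode assumed)
token=$(cat "$token_file")                 # "Read node-token from master"
chmod "$orig_mode" "$token_file"           # "Restore node-token file access"
echo "$token"
```

Capturing the original mode before changing it is what makes the restore step exact rather than guessing at a default.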
2026-04-05 00:54:05.103701 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-04-05 00:54:05.103712 | orchestrator | Sunday 05 April 2026 00:51:24 +0000 (0:00:00.661) 0:02:37.801 **********
2026-04-05 00:54:05.103723 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:54:05.103734 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:54:05.103745 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:54:05.103756 | orchestrator |
2026-04-05 00:54:05.103767 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-04-05 00:54:05.103778 | orchestrator | Sunday 05 April 2026 00:51:25 +0000 (0:00:00.910) 0:02:38.712 **********
2026-04-05 00:54:05.103788 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:54:05.103799 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:54:05.103810 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:54:05.103821 | orchestrator |
2026-04-05 00:54:05.103832 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-04-05 00:54:05.103843 | orchestrator | Sunday 05 April 2026 00:51:26 +0000 (0:00:00.894) 0:02:39.607 **********
2026-04-05 00:54:05.103855 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:54:05.103866 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:54:05.103877 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:54:05.103888 | orchestrator |
2026-04-05 00:54:05.103899 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-04-05 00:54:05.103910 | orchestrator | Sunday 05 April 2026 00:51:26 +0000 (0:00:00.578) 0:02:40.185 **********
2026-04-05 00:54:05.103921 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:54:05.103932 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:54:05.103943 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:54:05.103954 | orchestrator |
2026-04-05 00:54:05.103965 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-04-05 00:54:05.103975 | orchestrator | Sunday 05 April 2026 00:51:27 +0000 (0:00:00.407) 0:02:40.593 **********
2026-04-05 00:54:05.103986 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:54:05.103998 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:54:05.104008 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:54:05.104019 | orchestrator |
2026-04-05 00:54:05.104030 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-04-05 00:54:05.104041 | orchestrator | Sunday 05 April 2026 00:51:27 +0000 (0:00:00.712) 0:02:41.306 **********
2026-04-05 00:54:05.104052 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:54:05.104070 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:54:05.104089 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:54:05.104100 | orchestrator |
2026-04-05 00:54:05.104113 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-04-05 00:54:05.104124 | orchestrator | Sunday 05 April 2026 00:51:28 +0000 (0:00:00.716) 0:02:42.023 **********
2026-04-05 00:54:05.104136 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-05 00:54:05.104147 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-05 00:54:05.104158 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-05 00:54:05.104169 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-05 00:54:05.104180 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-05 00:54:05.104191 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-05 00:54:05.104202 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-05 00:54:05.104213 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-05 00:54:05.104224 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-05 00:54:05.104249 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-04-05 00:54:05.104260 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-05 00:54:05.104271 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-05 00:54:05.104281 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-04-05 00:54:05.104292 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-05 00:54:05.104303 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-05 00:54:05.104314 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-05 00:54:05.104325 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-05 00:54:05.104336 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-05 00:54:05.104347 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-05 00:54:05.104373 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-05 00:54:05.104385 | orchestrator |
2026-04-05 00:54:05.104396 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-04-05 00:54:05.104407 | orchestrator |
2026-04-05 00:54:05.104418 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-04-05 00:54:05.104429 | orchestrator | Sunday 05 April 2026 00:51:31 +0000 (0:00:03.422) 0:02:45.445 **********
2026-04-05 00:54:05.104439 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:54:05.104450 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:54:05.104461 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:54:05.104472 | orchestrator |
2026-04-05 00:54:05.104483 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-04-05 00:54:05.104494 | orchestrator | Sunday 05 April 2026 00:51:32 +0000 (0:00:00.374) 0:02:45.820 **********
2026-04-05 00:54:05.104505 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:54:05.104516 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:54:05.104527 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:54:05.104538 | orchestrator |
2026-04-05 00:54:05.104549 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-04-05 00:54:05.104560 | orchestrator | Sunday 05 April 2026 00:51:32 +0000 (0:00:00.464) 0:02:46.546 **********
2026-04-05 00:54:05.104571 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:54:05.104582 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:54:05.104592 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:54:05.104603 | orchestrator |
2026-04-05 00:54:05.104614 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-04-05 00:54:05.104625 | orchestrator | Sunday 05 April 2026 00:51:33 +0000 (0:00:00.542) 0:02:47.011 **********
2026-04-05 00:54:05.104636 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:54:05.104647 | orchestrator |
2026-04-05 00:54:05.104658 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-04-05 00:54:05.104669 | orchestrator | Sunday 05 April 2026 00:51:33 +0000 (0:00:00.542) 0:02:47.553 **********
2026-04-05 00:54:05.104680 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:54:05.104691 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:54:05.104702 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:54:05.104713 | orchestrator |
2026-04-05 00:54:05.104724 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-04-05 00:54:05.104735 | orchestrator | Sunday 05 April 2026 00:51:34 +0000 (0:00:00.363) 0:02:47.917 **********
2026-04-05 00:54:05.104746 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:54:05.104767 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:54:05.104780 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:54:05.104799 | orchestrator |
2026-04-05 00:54:05.104819 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-04-05 00:54:05.104849 | orchestrator | Sunday 05 April 2026 00:51:34 +0000 (0:00:00.446) 0:02:48.363 **********
2026-04-05 00:54:05.104879 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:54:05.104898 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:54:05.104909 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:54:05.104920 | orchestrator |
2026-04-05 00:54:05.104930 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-04-05 00:54:05.104941 | orchestrator | Sunday 05 April 2026 00:51:35 +0000 (0:00:00.288) 0:02:48.651 **********
2026-04-05 00:54:05.104952 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:54:05.104963 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:54:05.104974 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:54:05.104993 | orchestrator |
2026-04-05 00:54:05.105012 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-04-05 00:54:05.105031 | orchestrator | Sunday 05 April 2026 00:51:35 +0000 (0:00:00.745) 0:02:49.396 **********
2026-04-05 00:54:05.105051 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:54:05.105069 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:54:05.105088 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:54:05.105102 | orchestrator |
2026-04-05 00:54:05.105113 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-04-05 00:54:05.105124 | orchestrator | Sunday 05 April 2026 00:51:37 +0000 (0:00:01.325) 0:02:50.722 **********
2026-04-05 00:54:05.105134 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:54:05.105145 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:54:05.105156 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:54:05.105167 | orchestrator |
2026-04-05 00:54:05.105177 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-04-05 00:54:05.105188 | orchestrator | Sunday 05 April 2026 00:51:39 +0000 (0:00:01.912) 0:02:52.635 **********
2026-04-05 00:54:05.105199 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:54:05.105210 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:54:05.105220 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:54:05.105231 | orchestrator |
2026-04-05 00:54:05.105242 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-04-05 00:54:05.105253 | orchestrator |
2026-04-05 00:54:05.105264 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-04-05 00:54:05.105275 | orchestrator | Sunday 05 April 2026 00:51:49 +0000 (0:00:10.145) 0:03:02.780 **********
2026-04-05 00:54:05.105285 | orchestrator | ok: [testbed-manager]
2026-04-05 00:54:05.105296 | orchestrator |
2026-04-05 00:54:05.105307 | orchestrator | TASK [Create .kube directory] **************************************************
2026-04-05 00:54:05.105317 | orchestrator | Sunday 05 April 2026 00:51:50 +0000 (0:00:00.548) 0:03:03.737 **********
2026-04-05 00:54:05.105328 | orchestrator | changed: [testbed-manager]
2026-04-05 00:54:05.105339 | orchestrator |
2026-04-05 00:54:05.105350 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-04-05 00:54:05.105418 | orchestrator | Sunday 05 April 2026 00:51:50 +0000 (0:00:00.642) 0:03:04.286 **********
2026-04-05 00:54:05.105430 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-04-05 00:54:05.105441 | orchestrator |
2026-04-05 00:54:05.105453 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-04-05 00:54:05.105463 | orchestrator | Sunday 05 April 2026 00:51:51 +0000 (0:00:01.134) 0:03:04.928 **********
2026-04-05 00:54:05.105474 | orchestrator | changed: [testbed-manager]
2026-04-05 00:54:05.105485 | orchestrator |
2026-04-05 00:54:05.105494 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-04-05 00:54:05.105504 | orchestrator | Sunday 05 April 2026 00:51:52 +0000 (0:00:00.649) 0:03:06.063 **********
2026-04-05 00:54:05.105514 | orchestrator | changed: [testbed-manager]
2026-04-05 00:54:05.105533 | orchestrator |
2026-04-05 00:54:05.105543 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-04-05 00:54:05.105552 | orchestrator | Sunday 05 April 2026 00:51:53 +0000 (0:00:00.649) 0:03:06.713 **********
2026-04-05 00:54:05.105562 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-05 00:54:05.105572 | orchestrator |
2026-04-05 00:54:05.105581 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-04-05 00:54:05.105591 | orchestrator | Sunday 05 April 2026 00:51:55 +0000 (0:00:01.931) 0:03:08.644 **********
2026-04-05 00:54:05.105600 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-05 00:54:05.105610 | orchestrator |
2026-04-05 00:54:05.105620 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-04-05 00:54:05.105629 | orchestrator | Sunday 05 April 2026 00:51:56 +0000 (0:00:00.990) 0:03:09.635 **********
2026-04-05 00:54:05.105639 | orchestrator | changed: [testbed-manager]
2026-04-05 00:54:05.105649 | orchestrator |
2026-04-05 00:54:05.105659 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-04-05 00:54:05.105668 | orchestrator | Sunday 05 April 2026 00:51:56 +0000 (0:00:00.536) 0:03:10.171 **********
2026-04-05 00:54:05.105678 | orchestrator | changed: [testbed-manager]
2026-04-05 00:54:05.105687 | orchestrator |
2026-04-05 00:54:05.105697 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-04-05 00:54:05.105707 | orchestrator |
2026-04-05 00:54:05.105716 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-04-05 00:54:05.105726 | orchestrator | Sunday 05 April 2026 00:51:57 +0000 (0:00:00.712) 0:03:10.884 **********
2026-04-05 00:54:05.105735 | orchestrator | ok: [testbed-manager]
2026-04-05 00:54:05.105746 | orchestrator |
2026-04-05 00:54:05.105755 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-04-05 00:54:05.105765 | orchestrator | Sunday 05 April 2026 00:51:57 +0000 (0:00:00.178) 0:03:11.063 **********
2026-04-05 00:54:05.105775 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-04-05 00:54:05.105784 | orchestrator |
2026-04-05 00:54:05.105794 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-04-05 00:54:05.105803 | orchestrator | Sunday 05 April 2026 00:51:57 +0000 (0:00:00.249) 0:03:11.312 **********
2026-04-05 00:54:05.105812 | orchestrator | ok: [testbed-manager]
2026-04-05 00:54:05.105822 | orchestrator |
2026-04-05 00:54:05.105836 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-04-05 00:54:05.105853 | orchestrator | Sunday 05 April 2026 00:51:59 +0000 (0:00:01.845) 0:03:13.158 **********
2026-04-05 00:54:05.105880 | orchestrator | ok: [testbed-manager]
2026-04-05 00:54:05.105892 | orchestrator |
2026-04-05 00:54:05.105908 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-04-05 00:54:05.105919 | orchestrator | Sunday 05 April 2026 00:52:02 +0000 (0:00:02.682) 0:03:15.841 **********
2026-04-05 00:54:05.105936 | orchestrator | changed: [testbed-manager]
2026-04-05 00:54:05.105953 | orchestrator |
2026-04-05 00:54:05.105970 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-04-05 00:54:05.105986 | orchestrator | Sunday 05 April 2026 00:52:03 +0000 (0:00:01.252) 0:03:17.094 **********
2026-04-05 00:54:05.105996 | orchestrator | ok: [testbed-manager]
2026-04-05 00:54:05.106005 | orchestrator |
2026-04-05 00:54:05.106015 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-04-05 00:54:05.106061 | orchestrator | Sunday 05 April 2026 00:52:04 +0000 (0:00:00.701) 0:03:17.795 **********
2026-04-05 00:54:05.106070 | orchestrator | changed: [testbed-manager]
2026-04-05 00:54:05.106080 | orchestrator |
2026-04-05 00:54:05.106089 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-04-05 00:54:05.106098 | orchestrator | Sunday 05 April 2026 00:52:14 +0000 (0:00:10.659) 0:03:28.455 **********
2026-04-05 00:54:05.106108 | orchestrator | changed: [testbed-manager]
2026-04-05 00:54:05.106118 | orchestrator |
2026-04-05 00:54:05.106127 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-04-05 00:54:05.106146 | orchestrator | Sunday 05 April 2026 00:52:32 +0000 (0:00:17.564) 0:03:46.019 **********
2026-04-05 00:54:05.106155 | orchestrator | ok: [testbed-manager]
2026-04-05 00:54:05.106165 | orchestrator |
2026-04-05 00:54:05.106175 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-04-05 00:54:05.106184 | orchestrator |
2026-04-05 00:54:05.106194 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-04-05 00:54:05.106203 | orchestrator | Sunday 05 April 2026 00:52:33 +0000 (0:00:01.049) 0:03:47.068 **********
2026-04-05 00:54:05.106213 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:54:05.106223 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:54:05.106233 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:54:05.106242 | orchestrator |
2026-04-05 00:54:05.106252 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-04-05 00:54:05.106261 | orchestrator | Sunday 05 April 2026 00:52:34 +0000 (0:00:00.740) 0:03:47.809 **********
2026-04-05 00:54:05.106271 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:54:05.106280 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:54:05.106290 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:54:05.106299 | orchestrator |
2026-04-05 00:54:05.106309 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-04-05 00:54:05.106318 | orchestrator | Sunday 05 April 2026 00:52:34 +0000 (0:00:00.499) 0:03:48.309 **********
2026-04-05 00:54:05.106328 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:54:05.106338 | orchestrator |
2026-04-05 00:54:05.106347 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-04-05 00:54:05.106403 | orchestrator | Sunday 05 April 2026 00:52:35 +0000 (0:00:00.661) 0:03:48.971 **********
2026-04-05 00:54:05.106416 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-05 00:54:05.106426 | orchestrator |
2026-04-05 00:54:05.106436 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-04-05 00:54:05.106451 | orchestrator | Sunday 05 April 2026 00:52:36 +0000 (0:00:01.078) 0:03:50.050 **********
2026-04-05 00:54:05.106468 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-05 00:54:05.106482 | orchestrator |
2026-04-05 00:54:05.106506 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-04-05 00:54:05.106527 | orchestrator | Sunday 05 April 2026 00:52:37 +0000 (0:00:01.435) 0:03:51.485 **********
2026-04-05 00:54:05.106543 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:54:05.106560 | orchestrator |
2026-04-05 00:54:05.106576 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-04-05 00:54:05.106592 | orchestrator | Sunday 05 April 2026 00:52:38 +0000 (0:00:00.407) 0:03:51.892 **********
2026-04-05 00:54:05.106608 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-05 00:54:05.106624 | orchestrator |
2026-04-05 00:54:05.106640 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-04-05 00:54:05.106657 | orchestrator | Sunday 05 April 2026 00:52:39 +0000 (0:00:01.283) 0:03:53.175 **********
2026-04-05 00:54:05.106676 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:54:05.106693 | orchestrator |
2026-04-05 00:54:05.106712 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-04-05 00:54:05.106730 | orchestrator | Sunday 05 April 2026 00:52:39 +0000 (0:00:00.145) 0:03:53.321 **********
2026-04-05 00:54:05.106748 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:54:05.106766 | orchestrator |
2026-04-05 00:54:05.106784 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-04-05 00:54:05.106802 | orchestrator | Sunday 05 April 2026 00:52:39 +0000 (0:00:00.145) 0:03:53.466 **********
2026-04-05 00:54:05.106819 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:54:05.106836 | orchestrator |
2026-04-05 00:54:05.106854 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-04-05 00:54:05.106873 | orchestrator | Sunday 05 April 2026 00:52:40 +0000 (0:00:00.153) 0:03:53.620 **********
2026-04-05 00:54:05.106906 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:54:05.106926 | orchestrator |
2026-04-05 00:54:05.106941 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-04-05 00:54:05.106955 | orchestrator | Sunday 05 April 2026 00:52:40 +0000 (0:00:00.163) 0:03:53.783 **********
2026-04-05 00:54:05.106970 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-05 00:54:05.106985 | orchestrator |
2026-04-05 00:54:05.107000 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-04-05 00:54:05.107014 | orchestrator | Sunday 05 April 2026 00:52:46 +0000 (0:00:05.870) 0:03:59.654 **********
2026-04-05 00:54:05.107022 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-04-05 00:54:05.107043 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
2026-04-05 00:54:05.107059 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-04-05 00:54:05.107068 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-04-05 00:54:05.107076 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-04-05 00:54:05.107084 | orchestrator |
2026-04-05 00:54:05.107092 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-04-05 00:54:05.107100 | orchestrator | Sunday 05 April 2026 00:53:29 +0000 (0:00:43.136) 0:04:42.790 **********
2026-04-05 00:54:05.107108 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-05 00:54:05.107116 | orchestrator |
2026-04-05 00:54:05.107124 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-04-05 00:54:05.107132 | orchestrator | Sunday 05 April 2026 00:53:31 +0000 (0:00:02.049) 0:04:44.840 **********
2026-04-05 00:54:05.107140 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-05 00:54:05.107147 | orchestrator |
2026-04-05 00:54:05.107155 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-04-05 00:54:05.107163 | orchestrator | Sunday 05 April 2026 00:53:33 +0000 (0:00:01.926) 0:04:46.767 **********
2026-04-05 00:54:05.107171 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-05 00:54:05.107179 | orchestrator |
2026-04-05 00:54:05.107186 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-04-05 00:54:05.107194 | orchestrator | Sunday 05 April 2026 00:53:34 +0000 (0:00:01.335) 0:04:48.103 **********
2026-04-05 00:54:05.107202 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:54:05.107210 | orchestrator |
2026-04-05 00:54:05.107218 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-04-05 00:54:05.107225 | orchestrator | Sunday 05 April 2026 00:53:34 +0000 (0:00:00.141) 0:04:48.244 **********
2026-04-05 00:54:05.107233 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-04-05 00:54:05.107241 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-04-05 00:54:05.107249 | orchestrator |
2026-04-05 00:54:05.107256 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-04-05 00:54:05.107264 | orchestrator | Sunday 05 April 2026 00:53:37 +0000 (0:00:02.738) 0:04:50.983 **********
2026-04-05 00:54:05.107272 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:54:05.107280 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:54:05.107287 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:54:05.107295 | orchestrator |
2026-04-05 00:54:05.107303 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-04-05 00:54:05.107311 | orchestrator | Sunday 05 April 2026 00:53:37 +0000 (0:00:00.369) 0:04:51.352 **********
2026-04-05 00:54:05.107319 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:54:05.107327 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:54:05.107334 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:54:05.107342 | orchestrator |
2026-04-05 00:54:05.107350 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-04-05 00:54:05.107382 | orchestrator |
2026-04-05 00:54:05.107391 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-04-05 00:54:05.107399 | orchestrator | Sunday 05 April 2026 00:53:38 +0000 (0:00:01.063) 0:04:52.415 **********
2026-04-05 00:54:05.107407 | orchestrator | ok: [testbed-manager]
2026-04-05 00:54:05.107415 | orchestrator |
2026-04-05 00:54:05.107423 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-04-05 00:54:05.107431 | orchestrator | Sunday 05 April 2026 00:53:39 +0000 (0:00:00.165) 0:04:52.581 **********
2026-04-05 00:54:05.107438 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-04-05 00:54:05.107446 | orchestrator |
2026-04-05 00:54:05.107455 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-04-05 00:54:05.107463 | orchestrator | Sunday 05 April 2026 00:53:39 +0000 (0:00:00.489) 0:04:53.071 **********
2026-04-05 00:54:05.107471 | orchestrator | changed: [testbed-manager]
2026-04-05 00:54:05.107479 | orchestrator |
2026-04-05 00:54:05.107487 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-04-05 00:54:05.107495 | orchestrator |
2026-04-05 00:54:05.107503 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-04-05 00:54:05.107510 | orchestrator | Sunday 05 April 2026 00:53:45 +0000 (0:00:05.552) 0:04:58.623 **********
2026-04-05 00:54:05.107518 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:54:05.107526 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:54:05.107534 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:54:05.107542 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:54:05.107550 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:54:05.107558 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:54:05.107565 | orchestrator |
2026-04-05 00:54:05.107573 | orchestrator | TASK [Manage labels] ***********************************************************
2026-04-05 00:54:05.107581 | orchestrator | Sunday 05 April 2026 00:53:45 +0000 (0:00:00.720) 0:04:59.344 **********
2026-04-05 00:54:05.107589 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-05 00:54:05.107597 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-05 00:54:05.107605 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-05 00:54:05.107613 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-05 00:54:05.107621 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-05 00:54:05.107629 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-05 00:54:05.107637 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-05 00:54:05.107645 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-05 00:54:05.107664 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-05 00:54:05.107672 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-05 00:54:05.107680 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-05 00:54:05.107688 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-05 00:54:05.107696 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-05 00:54:05.107705 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-05 00:54:05.107712 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-05 00:54:05.107720 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-05 00:54:05.107728 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-05 00:54:05.107741 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-05 00:54:05.107749 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-05 00:54:05.107757 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-05 00:54:05.107765 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-05 00:54:05.107773 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-05 00:54:05.107781 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-05 00:54:05.107789 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-05 00:54:05.107796 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-05 00:54:05.107804 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-05 00:54:05.107812 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-05 00:54:05.107820 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-05 00:54:05.107827 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-05 00:54:05.107840 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-05 00:54:05.107853 | orchestrator |
2026-04-05 00:54:05.107867 | orchestrator | TASK [Manage annotations] ******************************************************
2026-04-05 00:54:05.107882 | orchestrator | Sunday 05 April 2026 00:54:02 +0000 (0:00:16.577) 0:05:15.922 **********
2026-04-05 00:54:05.107895 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:54:05.107910 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:54:05.107924 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:54:05.107936 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:54:05.107947 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:54:05.107960 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:54:05.107973 | orchestrator |
2026-04-05 00:54:05.107985 | orchestrator | TASK [Manage taints] ***********************************************************
2026-04-05 00:54:05.107998 | orchestrator | Sunday 05 April 2026 00:54:02 +0000 (0:00:00.536) 0:05:16.458 **********
2026-04-05 00:54:05.108012 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:54:05.108026 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:54:05.108039 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:54:05.108049 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:54:05.108056 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:54:05.108064 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:54:05.108072 | orchestrator |
2026-04-05 00:54:05.108080 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:54:05.108089 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:54:05.108099 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-04-05 00:54:05.108107 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-04-05 00:54:05.108115 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-04-05 00:54:05.108123 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-05 00:54:05.108131 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-05 00:54:05.108145 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-05 00:54:05.108153 | orchestrator |
2026-04-05 00:54:05.108161 | orchestrator |
2026-04-05 00:54:05.108169 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:54:05.108183 | orchestrator | Sunday 05 April 2026 00:54:03 +0000 (0:00:00.711) 0:05:17.170 **********
2026-04-05 00:54:05.108196 | orchestrator | ===============================================================================
2026-04-05 00:54:05.108204 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 54.21s
2026-04-05 00:54:05.108212 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 43.14s
2026-04-05 00:54:05.108220 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 26.14s
2026-04-05 00:54:05.108227 | orchestrator | kubectl : Install required packages ------------------------------------ 17.56s
2026-04-05 00:54:05.108235 | orchestrator | Manage labels ---------------------------------------------------------- 16.58s
2026-04-05 00:54:05.108243 | orchestrator | kubectl : Add repository Debian ---------------------------------------- 10.66s
2026-04-05 00:54:05.108251 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.15s
2026-04-05 00:54:05.108259 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.87s
2026-04-05 00:54:05.108267 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.80s
2026-04-05 00:54:05.108275 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.55s
2026-04-05 00:54:05.108282 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 4.38s
2026-04-05 00:54:05.108290 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact
------------------------------- 3.86s 2026-04-05 00:54:05.108298 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 3.56s 2026-04-05 00:54:05.108306 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.42s 2026-04-05 00:54:05.108314 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 3.24s 2026-04-05 00:54:05.108322 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 3.13s 2026-04-05 00:54:05.108329 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.89s 2026-04-05 00:54:05.108337 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.74s 2026-04-05 00:54:05.108345 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 2.68s 2026-04-05 00:54:05.108353 | orchestrator | k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry --- 2.49s 2026-04-05 00:54:05.108383 | orchestrator | 2026-04-05 00:54:05 | INFO  | Task db5e6290-b6df-483a-b288-4bbfed62d4a9 is in state STARTED 2026-04-05 00:54:05.108391 | orchestrator | 2026-04-05 00:54:05 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:54:05.108399 | orchestrator | 2026-04-05 00:54:05 | INFO  | Task 3a21a578-ab24-4c01-9524-7a77190c4f11 is in state STARTED 2026-04-05 00:54:05.108408 | orchestrator | 2026-04-05 00:54:05 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:54:05.108416 | orchestrator | 2026-04-05 00:54:05 | INFO  | Task 1ecf3d0d-c020-4da6-aaeb-1eb9ba8cf338 is in state STARTED 2026-04-05 00:54:05.109178 | orchestrator | 2026-04-05 00:54:05 | INFO  | Task 02fb6934-48f0-4453-beea-defa69552616 is in state STARTED 2026-04-05 00:54:05.109429 | orchestrator | 
2026-04-05 00:54:05 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:54:08.211947 | orchestrator | 2026-04-05 00:54:08 | INFO  | Task db5e6290-b6df-483a-b288-4bbfed62d4a9 is in state STARTED 2026-04-05 00:54:08.212238 | orchestrator | 2026-04-05 00:54:08 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:54:08.218984 | orchestrator | 2026-04-05 00:54:08 | INFO  | Task 3a21a578-ab24-4c01-9524-7a77190c4f11 is in state STARTED 2026-04-05 00:54:08.219842 | orchestrator | 2026-04-05 00:54:08 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:54:08.226252 | orchestrator | 2026-04-05 00:54:08 | INFO  | Task 1ecf3d0d-c020-4da6-aaeb-1eb9ba8cf338 is in state STARTED 2026-04-05 00:54:08.226869 | orchestrator | 2026-04-05 00:54:08 | INFO  | Task 02fb6934-48f0-4453-beea-defa69552616 is in state STARTED 2026-04-05 00:54:08.226971 | orchestrator | 2026-04-05 00:54:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:54:11.296120 | orchestrator | 2026-04-05 00:54:11 | INFO  | Task db5e6290-b6df-483a-b288-4bbfed62d4a9 is in state STARTED 2026-04-05 00:54:11.298313 | orchestrator | 2026-04-05 00:54:11 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:54:11.300799 | orchestrator | 2026-04-05 00:54:11 | INFO  | Task 3a21a578-ab24-4c01-9524-7a77190c4f11 is in state STARTED 2026-04-05 00:54:11.301678 | orchestrator | 2026-04-05 00:54:11 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:54:11.308711 | orchestrator | 2026-04-05 00:54:11 | INFO  | Task 1ecf3d0d-c020-4da6-aaeb-1eb9ba8cf338 is in state STARTED 2026-04-05 00:54:11.308828 | orchestrator | 2026-04-05 00:54:11 | INFO  | Task 02fb6934-48f0-4453-beea-defa69552616 is in state STARTED 2026-04-05 00:54:11.308913 | orchestrator | 2026-04-05 00:54:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:54:14.371225 | orchestrator | 2026-04-05 00:54:14 | INFO  | 
Task db5e6290-b6df-483a-b288-4bbfed62d4a9 is in state STARTED 2026-04-05 00:54:14.372147 | orchestrator | 2026-04-05 00:54:14 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:54:14.372191 | orchestrator | 2026-04-05 00:54:14 | INFO  | Task 3a21a578-ab24-4c01-9524-7a77190c4f11 is in state STARTED 2026-04-05 00:54:14.372202 | orchestrator | 2026-04-05 00:54:14 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:54:14.372212 | orchestrator | 2026-04-05 00:54:14 | INFO  | Task 1ecf3d0d-c020-4da6-aaeb-1eb9ba8cf338 is in state SUCCESS 2026-04-05 00:54:14.374901 | orchestrator | 2026-04-05 00:54:14 | INFO  | Task 02fb6934-48f0-4453-beea-defa69552616 is in state STARTED 2026-04-05 00:54:14.374986 | orchestrator | 2026-04-05 00:54:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:54:17.416289 | orchestrator | 2026-04-05 00:54:17 | INFO  | Task db5e6290-b6df-483a-b288-4bbfed62d4a9 is in state STARTED 2026-04-05 00:54:17.417183 | orchestrator | 2026-04-05 00:54:17 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:54:17.420843 | orchestrator | 2026-04-05 00:54:17 | INFO  | Task 3a21a578-ab24-4c01-9524-7a77190c4f11 is in state STARTED 2026-04-05 00:54:17.424069 | orchestrator | 2026-04-05 00:54:17 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:54:17.427558 | orchestrator | 2026-04-05 00:54:17 | INFO  | Task 02fb6934-48f0-4453-beea-defa69552616 is in state STARTED 2026-04-05 00:54:17.427625 | orchestrator | 2026-04-05 00:54:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:54:20.492402 | orchestrator | 2026-04-05 00:54:20 | INFO  | Task db5e6290-b6df-483a-b288-4bbfed62d4a9 is in state STARTED 2026-04-05 00:54:20.493337 | orchestrator | 2026-04-05 00:54:20 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:54:20.493422 | orchestrator | 2026-04-05 00:54:20 | INFO  | Task 
3a21a578-ab24-4c01-9524-7a77190c4f11 is in state STARTED 2026-04-05 00:54:20.494272 | orchestrator | 2026-04-05 00:54:20 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:54:20.494800 | orchestrator | 2026-04-05 00:54:20 | INFO  | Task 02fb6934-48f0-4453-beea-defa69552616 is in state SUCCESS 2026-04-05 00:54:20.495017 | orchestrator | 2026-04-05 00:54:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:54:23.529275 | orchestrator | 2026-04-05 00:54:23 | INFO  | Task db5e6290-b6df-483a-b288-4bbfed62d4a9 is in state STARTED 2026-04-05 00:54:23.529672 | orchestrator | 2026-04-05 00:54:23 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:54:23.530692 | orchestrator | 2026-04-05 00:54:23 | INFO  | Task 3a21a578-ab24-4c01-9524-7a77190c4f11 is in state STARTED 2026-04-05 00:54:23.531456 | orchestrator | 2026-04-05 00:54:23 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:54:23.532287 | orchestrator | 2026-04-05 00:54:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:54:26.577166 | orchestrator | 2026-04-05 00:54:26 | INFO  | Task db5e6290-b6df-483a-b288-4bbfed62d4a9 is in state STARTED 2026-04-05 00:54:26.577758 | orchestrator | 2026-04-05 00:54:26 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:54:26.578803 | orchestrator | 2026-04-05 00:54:26 | INFO  | Task 3a21a578-ab24-4c01-9524-7a77190c4f11 is in state STARTED 2026-04-05 00:54:26.580964 | orchestrator | 2026-04-05 00:54:26 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:54:26.580995 | orchestrator | 2026-04-05 00:54:26 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:54:29.612664 | orchestrator | 2026-04-05 00:54:29 | INFO  | Task db5e6290-b6df-483a-b288-4bbfed62d4a9 is in state STARTED 2026-04-05 00:54:29.615952 | orchestrator | 2026-04-05 00:54:29 | INFO  | Task 
90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED
2026-04-05 00:55:51.945634 | orchestrator | 2026-04-05 00:55:51 | INFO  | Task 3a21a578-ab24-4c01-9524-7a77190c4f11 is in state SUCCESS
2026-04-05 00:55:51.945990 | orchestrator |
2026-04-05 00:55:51.946047 | orchestrator |
2026-04-05 00:55:51.946059 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-04-05 00:55:51.946068 | orchestrator |
2026-04-05 00:55:51.946076 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-04-05 00:55:51.946084 | orchestrator | Sunday 05 April 2026 00:54:08 +0000 (0:00:00.257) 0:00:00.257 **********
2026-04-05 00:55:51.946092 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-04-05 00:55:51.946100 | orchestrator |
2026-04-05 00:55:51.946107 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-04-05 00:55:51.946115 | orchestrator | Sunday 05 April 2026 00:54:10 +0000 (0:00:01.161) 0:00:01.418 **********
2026-04-05 00:55:51.946123 | orchestrator | changed: [testbed-manager]
2026-04-05 00:55:51.946131 | orchestrator |
2026-04-05 00:55:51.946139 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-04-05 00:55:51.946146 | orchestrator | Sunday 05 April 2026 00:54:12 +0000 (0:00:01.984) 0:00:03.403 **********
2026-04-05 00:55:51.946153 | orchestrator | changed: [testbed-manager]
2026-04-05 00:55:51.946160 | orchestrator |
2026-04-05 00:55:51.946168 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:55:51.946176 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:55:51.946185 | orchestrator |
2026-04-05 00:55:51.946192 | orchestrator |
2026-04-05 00:55:51.946226 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:55:51.946275 | orchestrator | Sunday 05 April 2026 00:54:12 +0000 (0:00:00.767) 0:00:04.171 **********
2026-04-05 00:55:51.946284 | orchestrator | ===============================================================================
2026-04-05 00:55:51.946292 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.98s
2026-04-05 00:55:51.946299 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.16s
2026-04-05 00:55:51.946306 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.77s
2026-04-05 00:55:51.946314 | orchestrator |
2026-04-05 00:55:51.946321 | orchestrator |
2026-04-05 00:55:51.946328 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-04-05 00:55:51.946335 | orchestrator |
2026-04-05 00:55:51.946342 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-04-05 00:55:51.946350 | orchestrator | Sunday 05 April 2026 00:54:08 +0000 (0:00:00.315) 0:00:00.315 **********
2026-04-05 00:55:51.946357 | orchestrator | ok: [testbed-manager]
2026-04-05 00:55:51.946385 | orchestrator |
2026-04-05 00:55:51.946393 | orchestrator | TASK [Create .kube directory] **************************************************
2026-04-05 00:55:51.946400 | orchestrator | Sunday 05 April 2026 00:54:09 +0000 (0:00:01.014) 0:00:01.329 **********
2026-04-05 00:55:51.946407 | orchestrator | ok: [testbed-manager]
2026-04-05 00:55:51.946415 | orchestrator |
2026-04-05 00:55:51.946422 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-04-05 00:55:51.946429 | orchestrator | Sunday 05 April 2026 00:54:10 +0000 (0:00:00.805) 0:00:02.135 **********
2026-04-05 00:55:51.946436 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-04-05 00:55:51.946479 | orchestrator |
2026-04-05 00:55:51.946488 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-04-05 00:55:51.946495 | orchestrator | Sunday 05 April 2026 00:54:12 +0000 (0:00:01.300) 0:00:03.436 **********
2026-04-05 00:55:51.946502 | orchestrator | changed: [testbed-manager]
2026-04-05 00:55:51.946509 | orchestrator |
2026-04-05 00:55:51.946516 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-04-05 00:55:51.946523 | orchestrator | Sunday 05 April 2026 00:54:13 +0000 (0:00:01.639) 0:00:05.075 **********
2026-04-05 00:55:51.946531 | orchestrator | changed: [testbed-manager]
2026-04-05 00:55:51.946538 | orchestrator |
2026-04-05 00:55:51.946545 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-04-05 00:55:51.946552 | orchestrator | Sunday 05 April 2026 00:54:14 +0000 (0:00:00.920) 0:00:05.996 **********
2026-04-05 00:55:51.946559 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-05 00:55:51.946567 | orchestrator |
2026-04-05 00:55:51.946574 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-04-05 00:55:51.946581 | orchestrator | Sunday 05 April 2026 00:54:16 +0000 (0:00:02.033) 0:00:08.030 **********
2026-04-05 00:55:51.946588 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-05 00:55:51.946595 | orchestrator |
2026-04-05 00:55:51.946602 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-04-05 00:55:51.946609 | orchestrator | Sunday 05 April 2026 00:54:17 +0000 (0:00:01.073) 0:00:09.103 **********
2026-04-05 00:55:51.946618 | orchestrator | ok: [testbed-manager]
2026-04-05 00:55:51.946626 | orchestrator |
2026-04-05 00:55:51.946635 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-04-05 00:55:51.946643 | orchestrator | Sunday 05 April 2026 00:54:18 +0000 (0:00:00.462) 0:00:09.566 **********
2026-04-05 00:55:51.946652 | orchestrator | ok: [testbed-manager]
2026-04-05 00:55:51.946661 | orchestrator |
2026-04-05 00:55:51.946669 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:55:51.946703 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:55:51.946713 | orchestrator |
2026-04-05 00:55:51.946721 | orchestrator |
2026-04-05 00:55:51.946729 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:55:51.946757 | orchestrator | Sunday 05 April 2026 00:54:18 +0000 (0:00:00.354) 0:00:09.921 **********
2026-04-05 00:55:51.946765 | orchestrator | ===============================================================================
2026-04-05 00:55:51.946774 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 2.03s
2026-04-05 00:55:51.946783 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.64s
2026-04-05 00:55:51.946821 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.30s
2026-04-05 00:55:51.946841 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 1.07s
2026-04-05 00:55:51.946851 | orchestrator | Get home directory of operator user ------------------------------------- 1.02s
2026-04-05 00:55:51.946859 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.92s
2026-04-05 00:55:51.946867 | orchestrator | Create .kube directory -------------------------------------------------- 0.81s
2026-04-05 00:55:51.946875 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.46s
2026-04-05 00:55:51.946890 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.35s
2026-04-05 00:55:51.946899 | orchestrator |
2026-04-05 00:55:51.947640 | orchestrator |
2026-04-05 00:55:51.947666 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2026-04-05 00:55:51.947674 | orchestrator |
2026-04-05 00:55:51.947681 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-04-05 00:55:51.947689 | orchestrator | Sunday 05 April 2026 00:52:21 +0000 (0:00:00.178) 0:00:00.178 **********
2026-04-05 00:55:51.947697 | orchestrator | ok: [localhost] => {
2026-04-05 00:55:51.947705 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2026-04-05 00:55:51.947713 | orchestrator | }
2026-04-05 00:55:51.947721 | orchestrator |
2026-04-05 00:55:51.947728 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2026-04-05 00:55:51.947735 | orchestrator | Sunday 05 April 2026 00:52:22 +0000 (0:00:00.135) 0:00:00.314 **********
2026-04-05 00:55:51.947748 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2026-04-05 00:55:51.947757 | orchestrator | ...ignoring
2026-04-05 00:55:51.947765 | orchestrator |
2026-04-05 00:55:51.947772 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2026-04-05 00:55:51.947779 | orchestrator | Sunday 05 April 2026 00:52:26 +0000 (0:00:04.317) 0:00:04.631 **********
2026-04-05 00:55:51.947786 | orchestrator | skipping: [localhost]
2026-04-05 00:55:51.947793 | orchestrator |
2026-04-05 00:55:51.947833 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2026-04-05 00:55:51.947841 | orchestrator | Sunday 05 April 2026 00:52:26 +0000 (0:00:00.163) 0:00:04.795 **********
2026-04-05 00:55:51.947849 | orchestrator | ok: [localhost]
2026-04-05 00:55:51.947856 | orchestrator |
2026-04-05 00:55:51.947863 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 00:55:51.947870 | orchestrator |
2026-04-05 00:55:51.947877 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 00:55:51.947884 | orchestrator | Sunday 05 April 2026 00:52:27 +0000 (0:00:00.763) 0:00:05.558 **********
2026-04-05 00:55:51.947892 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:55:51.947899 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:55:51.947906 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:55:51.947913 | orchestrator |
2026-04-05 00:55:51.947921 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 00:55:51.947928 | orchestrator | Sunday 05 April 2026 00:52:28 +0000 (0:00:00.748) 0:00:06.306 **********
2026-04-05 00:55:51.947935 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2026-04-05 00:55:51.947943 | orchestrator | ok: [testbed-node-1] =>
(item=enable_rabbitmq_True)
2026-04-05 00:55:51.947950 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2026-04-05 00:55:51.947957 | orchestrator |
2026-04-05 00:55:51.947965 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2026-04-05 00:55:51.947972 | orchestrator |
2026-04-05 00:55:51.947979 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-04-05 00:55:51.947986 | orchestrator | Sunday 05 April 2026 00:52:29 +0000 (0:00:01.592) 0:00:07.899 **********
2026-04-05 00:55:51.947994 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:55:51.948001 | orchestrator |
2026-04-05 00:55:51.948008 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-04-05 00:55:51.948015 | orchestrator | Sunday 05 April 2026 00:52:32 +0000 (0:00:02.662) 0:00:10.562 **********
2026-04-05 00:55:51.948022 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:55:51.948030 | orchestrator |
2026-04-05 00:55:51.948037 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2026-04-05 00:55:51.948044 | orchestrator | Sunday 05 April 2026 00:52:34 +0000 (0:00:02.220) 0:00:12.783 **********
2026-04-05 00:55:51.948081 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:55:51.948090 | orchestrator |
2026-04-05 00:55:51.948097 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2026-04-05 00:55:51.948104 | orchestrator | Sunday 05 April 2026 00:52:35 +0000 (0:00:00.655) 0:00:13.438 **********
2026-04-05 00:55:51.948111 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:55:51.948118 | orchestrator |
2026-04-05 00:55:51.948125 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2026-04-05 00:55:51.948133 | orchestrator | Sunday 05 April 2026 00:52:35 +0000 (0:00:00.444) 0:00:13.883 **********
2026-04-05 00:55:51.948140 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:55:51.948147 | orchestrator |
2026-04-05 00:55:51.948154 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2026-04-05 00:55:51.948161 | orchestrator | Sunday 05 April 2026 00:52:36 +0000 (0:00:00.469) 0:00:14.353 **********
2026-04-05 00:55:51.948168 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:55:51.948175 | orchestrator |
2026-04-05 00:55:51.948182 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-04-05 00:55:51.948189 | orchestrator | Sunday 05 April 2026 00:52:36 +0000 (0:00:00.749) 0:00:15.103 **********
2026-04-05 00:55:51.948197 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:55:51.948204 | orchestrator |
2026-04-05 00:55:51.948212 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-04-05 00:55:51.948219 | orchestrator | Sunday 05 April 2026 00:52:37 +0000 (0:00:00.986) 0:00:16.090 **********
2026-04-05 00:55:51.948226 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:55:51.948289 | orchestrator |
2026-04-05 00:55:51.948299 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2026-04-05 00:55:51.948308 | orchestrator | Sunday 05 April 2026 00:52:39 +0000 (0:00:01.379) 0:00:17.470 **********
2026-04-05 00:55:51.948316 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:55:51.948324 | orchestrator |
2026-04-05 00:55:51.948333 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2026-04-05 00:55:51.948341 | orchestrator | Sunday 05 April 2026 00:52:42 +0000 (0:00:02.887) 0:00:20.357 **********
2026-04-05 00:55:51.948349 | orchestrator |
skipping: [testbed-node-0]
2026-04-05 00:55:51.948358 | orchestrator |
2026-04-05 00:55:51.948379 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2026-04-05 00:55:51.948388 | orchestrator | Sunday 05 April 2026 00:52:42 +0000 (0:00:00.473) 0:00:20.831 **********
2026-04-05 00:55:51.948404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-05 00:55:51.948418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-05 00:55:51.948434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-05 00:55:51.948443 | orchestrator |
2026-04-05 00:55:51.948452 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2026-04-05 00:55:51.948461 | orchestrator | Sunday 05 April 2026 00:52:44 +0000 (0:00:01.713) 0:00:22.545 **********
2026-04-05 00:55:51.948476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-05 00:55:51.948489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-05 00:55:51.948503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-05 00:55:51.948513 | orchestrator |
2026-04-05 00:55:51.948521 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2026-04-05 00:55:51.948530 | orchestrator | Sunday 05 April 2026 00:52:46 +0000 (0:00:02.662) 0:00:25.207 **********
2026-04-05 00:55:51.948538 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-04-05 00:55:51.948545 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-04-05 00:55:51.948552 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-04-05 00:55:51.948559 | orchestrator |
2026-04-05 00:55:51.948566 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2026-04-05 00:55:51.948574 | orchestrator | Sunday 05 April 2026 00:52:49 +0000 (0:00:02.435) 0:00:27.643 **********
2026-04-05 00:55:51.948580 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-04-05 00:55:51.948587 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-04-05 00:55:51.948594 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-04-05 00:55:51.948600 | orchestrator |
2026-04-05 00:55:51.948607 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2026-04-05 00:55:51.948613 | orchestrator | Sunday 05 April 2026 00:52:51 +0000 (0:00:02.301) 0:00:29.944 **********
2026-04-05 00:55:51.948620 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-04-05 00:55:51.948627 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-04-05 00:55:51.948633 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-04-05 00:55:51.948640 | orchestrator |
2026-04-05 00:55:51.948646 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2026-04-05 00:55:51.948653 | orchestrator | Sunday 05 April 2026 00:52:53 +0000 (0:00:01.607) 0:00:31.552 **********
2026-04-05 00:55:51.948664 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-04-05 00:55:51.948671 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-04-05 00:55:51.948677 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-04-05 00:55:51.948684 | orchestrator |
2026-04-05 00:55:51.948691 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2026-04-05 00:55:51.948697 | orchestrator | Sunday 05 April 2026 00:52:55 +0000 (0:00:01.756) 0:00:33.308 **********
2026-04-05 00:55:51.948704 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-04-05 00:55:51.948718 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-04-05 00:55:51.948725 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-04-05 00:55:51.948731 | orchestrator |
2026-04-05 00:55:51.948738 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-04-05 00:55:51.948745 | orchestrator | Sunday 05 April 2026 00:52:56 +0000 (0:00:01.588) 0:00:34.897 **********
2026-04-05 00:55:51.948751 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-04-05 00:55:51.948758 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-04-05 00:55:51.948764 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-04-05 00:55:51.948771 | orchestrator |
2026-04-05 00:55:51.948778 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-04-05 00:55:51.948784 | orchestrator | Sunday 05 April 2026 00:52:58 +0000 (0:00:01.624) 0:00:36.521 **********
2026-04-05 00:55:51.948791 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:55:51.948798 | orchestrator |
2026-04-05 00:55:51.948804 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] *******
2026-04-05 00:55:51.948811 | orchestrator | Sunday 05 April 2026 00:52:59 +0000 (0:00:01.518) 0:00:38.040 **********
2026-04-05 00:55:51.948818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-05 00:55:51.948825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-05 00:55:51.948842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-05 00:55:51.948854 | orchestrator |
2026-04-05 00:55:51.948860 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] ***
2026-04-05 00:55:51.948867 | orchestrator | Sunday 05 April 2026 00:53:01 +0000 (0:00:01.344) 0:00:39.385 **********
2026-04-05 00:55:51.948874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-05 00:55:51.948881 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:55:51.948888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-05 00:55:51.948896 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:55:51.948907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-05 00:55:51.948918 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:55:51.948925 | orchestrator |
2026-04-05 00:55:51.948932 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] ****
2026-04-05 00:55:51.948938 | orchestrator | Sunday 05 April 2026 00:53:02 +0000 (0:00:00.927) 0:00:40.312 **********
2026-04-05 00:55:51.948948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-05 00:55:51.948956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-05 00:55:51.948963 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:55:51.948970 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:55:51.948977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-05 00:55:51.948985 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:55:51.948991 | orchestrator |
2026-04-05 00:55:51.949003 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ******************
2026-04-05 00:55:51.949010 | orchestrator | Sunday 05 April 2026 00:53:03 +0000 (0:00:01.157) 0:00:41.470 **********
2026-04-05 00:55:51.949025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval':
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 00:55:51.949033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 00:55:51.949041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 00:55:51.949048 | orchestrator | 2026-04-05 00:55:51.949055 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-04-05 00:55:51.949062 | orchestrator | Sunday 05 April 2026 00:53:05 +0000 (0:00:02.081) 0:00:43.551 ********** 2026-04-05 00:55:51.949068 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 00:55:51.949075 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:55:51.949082 | orchestrator | } 2026-04-05 00:55:51.949088 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 00:55:51.949095 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:55:51.949102 | orchestrator | } 2026-04-05 00:55:51.949115 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 00:55:51.949122 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:55:51.949129 | orchestrator | } 2026-04-05 00:55:51.949135 | orchestrator | 2026-04-05 00:55:51.949142 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 00:55:51.949148 | orchestrator | Sunday 05 April 2026 00:53:06 +0000 (0:00:01.219) 0:00:44.771 ********** 2026-04-05 00:55:51.949161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-05 00:55:51.949168 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:55:51.949178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-05 00:55:51.949185 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:55:51.949192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-05 00:55:51.949200 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:55:51.949206 | orchestrator | 2026-04-05 00:55:51.949213 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-04-05 00:55:51.949220 | orchestrator | Sunday 05 April 2026 00:53:07 +0000 (0:00:01.010) 0:00:45.782 ********** 2026-04-05 00:55:51.949232 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:55:51.949252 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:55:51.949259 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:55:51.949266 | orchestrator | 2026-04-05 00:55:51.949272 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-04-05 00:55:51.949279 | orchestrator | Sunday 05 April 2026 00:53:08 +0000 (0:00:00.961) 0:00:46.743 ********** 2026-04-05 00:55:51.949286 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:55:51.949292 | orchestrator | changed: [testbed-node-2] 
2026-04-05 00:55:51.949299 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:55:51.949305 | orchestrator | 2026-04-05 00:55:51.949312 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-04-05 00:55:51.949319 | orchestrator | Sunday 05 April 2026 00:53:18 +0000 (0:00:09.987) 0:00:56.731 ********** 2026-04-05 00:55:51.949325 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:55:51.949332 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:55:51.949338 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:55:51.949345 | orchestrator | 2026-04-05 00:55:51.949352 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-04-05 00:55:51.949358 | orchestrator | 2026-04-05 00:55:51.949365 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-04-05 00:55:51.949372 | orchestrator | Sunday 05 April 2026 00:53:18 +0000 (0:00:00.364) 0:00:57.096 ********** 2026-04-05 00:55:51.949378 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:55:51.949385 | orchestrator | 2026-04-05 00:55:51.949392 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-04-05 00:55:51.949398 | orchestrator | Sunday 05 April 2026 00:53:19 +0000 (0:00:00.628) 0:00:57.724 ********** 2026-04-05 00:55:51.949405 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:55:51.949411 | orchestrator | 2026-04-05 00:55:51.949418 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-04-05 00:55:51.949424 | orchestrator | Sunday 05 April 2026 00:53:19 +0000 (0:00:00.115) 0:00:57.840 ********** 2026-04-05 00:55:51.949431 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:55:51.949438 | orchestrator | 2026-04-05 00:55:51.949447 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-04-05 00:55:51.949454 | 
orchestrator | Sunday 05 April 2026 00:53:21 +0000 (0:00:01.897) 0:00:59.738 ********** 2026-04-05 00:55:51.949461 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:55:51.949467 | orchestrator | 2026-04-05 00:55:51.949474 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-04-05 00:55:51.949480 | orchestrator | 2026-04-05 00:55:51.949487 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-04-05 00:55:51.949494 | orchestrator | Sunday 05 April 2026 00:55:14 +0000 (0:01:53.431) 0:02:53.169 ********** 2026-04-05 00:55:51.949500 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:55:51.949507 | orchestrator | 2026-04-05 00:55:51.949513 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-04-05 00:55:51.949523 | orchestrator | Sunday 05 April 2026 00:55:15 +0000 (0:00:00.749) 0:02:53.918 ********** 2026-04-05 00:55:51.949530 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:55:51.949536 | orchestrator | 2026-04-05 00:55:51.949543 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-04-05 00:55:51.949550 | orchestrator | Sunday 05 April 2026 00:55:15 +0000 (0:00:00.118) 0:02:54.037 ********** 2026-04-05 00:55:51.949556 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:55:51.949563 | orchestrator | 2026-04-05 00:55:51.949569 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-04-05 00:55:51.949576 | orchestrator | Sunday 05 April 2026 00:55:17 +0000 (0:00:01.976) 0:02:56.014 ********** 2026-04-05 00:55:51.949582 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:55:51.949589 | orchestrator | 2026-04-05 00:55:51.949595 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-04-05 00:55:51.949607 | orchestrator | 2026-04-05 00:55:51.949614 | 
orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-04-05 00:55:51.949620 | orchestrator | Sunday 05 April 2026 00:55:30 +0000 (0:00:12.633) 0:03:08.647 ********** 2026-04-05 00:55:51.949627 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:55:51.949633 | orchestrator | 2026-04-05 00:55:51.949640 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-04-05 00:55:51.949646 | orchestrator | Sunday 05 April 2026 00:55:31 +0000 (0:00:00.731) 0:03:09.379 ********** 2026-04-05 00:55:51.949653 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:55:51.949660 | orchestrator | 2026-04-05 00:55:51.949666 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-04-05 00:55:51.949673 | orchestrator | Sunday 05 April 2026 00:55:31 +0000 (0:00:00.144) 0:03:09.523 ********** 2026-04-05 00:55:51.949679 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:55:51.949686 | orchestrator | 2026-04-05 00:55:51.949692 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-04-05 00:55:51.949699 | orchestrator | Sunday 05 April 2026 00:55:33 +0000 (0:00:01.837) 0:03:11.360 ********** 2026-04-05 00:55:51.949706 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:55:51.949712 | orchestrator | 2026-04-05 00:55:51.949719 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-04-05 00:55:51.949725 | orchestrator | 2026-04-05 00:55:51.949732 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-04-05 00:55:51.949738 | orchestrator | Sunday 05 April 2026 00:55:44 +0000 (0:00:11.773) 0:03:23.134 ********** 2026-04-05 00:55:51.949745 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:55:51.949751 | orchestrator | 2026-04-05 00:55:51.949758 | 
orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-04-05 00:55:51.949765 | orchestrator | Sunday 05 April 2026 00:55:45 +0000 (0:00:00.656) 0:03:23.790 ********** 2026-04-05 00:55:51.949771 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:55:51.949778 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:55:51.949784 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:55:51.949791 | orchestrator | 2026-04-05 00:55:51.949797 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:55:51.949804 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-04-05 00:55:51.949812 | orchestrator | testbed-node-0 : ok=26  changed=16  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2026-04-05 00:55:51.949819 | orchestrator | testbed-node-1 : ok=24  changed=16  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-05 00:55:51.949825 | orchestrator | testbed-node-2 : ok=24  changed=16  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-05 00:55:51.949832 | orchestrator | 2026-04-05 00:55:51.949838 | orchestrator | 2026-04-05 00:55:51.949845 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:55:51.949851 | orchestrator | Sunday 05 April 2026 00:55:48 +0000 (0:00:03.372) 0:03:27.163 ********** 2026-04-05 00:55:51.949858 | orchestrator | =============================================================================== 2026-04-05 00:55:51.949864 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------ 137.84s 2026-04-05 00:55:51.949871 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 9.99s 2026-04-05 00:55:51.949878 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 5.71s 2026-04-05 00:55:51.949884 | orchestrator | Check RabbitMQ service 
-------------------------------------------------- 4.32s 2026-04-05 00:55:51.949891 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.37s 2026-04-05 00:55:51.949897 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 2.89s 2026-04-05 00:55:51.949908 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.66s 2026-04-05 00:55:51.949918 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 2.66s 2026-04-05 00:55:51.949925 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.44s 2026-04-05 00:55:51.949932 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.30s 2026-04-05 00:55:51.949938 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.23s 2026-04-05 00:55:51.949945 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.11s 2026-04-05 00:55:51.949951 | orchestrator | service-check-containers : rabbitmq | Check containers ------------------ 2.08s 2026-04-05 00:55:51.949958 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.76s 2026-04-05 00:55:51.949965 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.71s 2026-04-05 00:55:51.949974 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.62s 2026-04-05 00:55:51.949981 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.61s 2026-04-05 00:55:51.949987 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.59s 2026-04-05 00:55:51.949994 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.59s 2026-04-05 00:55:51.950001 | orchestrator | rabbitmq : include_tasks 
------------------------------------------------ 1.52s 2026-04-05 00:55:51.950154 | orchestrator | 2026-04-05 00:55:51 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:55:51.950168 | orchestrator | 2026-04-05 00:55:51 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:55:54.994331 | orchestrator | 2026-04-05 00:55:54 | INFO  | Task db5e6290-b6df-483a-b288-4bbfed62d4a9 is in state STARTED 2026-04-05 00:55:54.997796 | orchestrator | 2026-04-05 00:55:54 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:55:54.999686 | orchestrator | 2026-04-05 00:55:54 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:55:54.999751 | orchestrator | 2026-04-05 00:55:54 | INFO  | Wait 1 second(s) until the next check
Wait 1 second(s) until the next check 2026-04-05 00:57:38.474502 | orchestrator | 2026-04-05 00:57:38 | INFO  | Task db5e6290-b6df-483a-b288-4bbfed62d4a9 is in state STARTED 2026-04-05 00:57:38.476671 | orchestrator | 2026-04-05 00:57:38 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:57:38.480381 | orchestrator | 2026-04-05 00:57:38 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:57:38.480452 | orchestrator | 2026-04-05 00:57:38 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:57:41.526694 | orchestrator | 2026-04-05 00:57:41 | INFO  | Task db5e6290-b6df-483a-b288-4bbfed62d4a9 is in state STARTED 2026-04-05 00:57:41.528406 | orchestrator | 2026-04-05 00:57:41 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:57:41.532117 | orchestrator | 2026-04-05 00:57:41 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:57:41.532203 | orchestrator | 2026-04-05 00:57:41 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:57:44.565758 | orchestrator | 2026-04-05 00:57:44 | INFO  | Task db5e6290-b6df-483a-b288-4bbfed62d4a9 is in state STARTED 2026-04-05 00:57:44.567278 | orchestrator | 2026-04-05 00:57:44 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:57:44.568053 | orchestrator | 2026-04-05 00:57:44 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:57:44.568075 | orchestrator | 2026-04-05 00:57:44 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:57:47.632443 | orchestrator | 2026-04-05 00:57:47.632693 | orchestrator | 2026-04-05 00:57:47 | INFO  | Task db5e6290-b6df-483a-b288-4bbfed62d4a9 is in state SUCCESS 2026-04-05 00:57:47.634623 | orchestrator | 2026-04-05 00:57:47.634678 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 00:57:47.634691 | orchestrator | 
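The status lines above come from a client that checks each task's state once per cycle and sleeps between checks until every task reaches a terminal state. A minimal sketch of that loop, assuming a hypothetical `get_state(task_id)` callable that returns Celery-style state strings ("STARTED", "SUCCESS", "FAILURE"):

```python
import time

TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=3600, log=print):
    """Poll each task until all reach a terminal state.

    get_state(task_id) -> state string such as "STARTED" or "SUCCESS".
    Mirrors the log output above: one status line per task per cycle,
    then a "Wait N second(s)" line and a sleep before the next check.
    """
    pending = list(task_ids)
    deadline = time.monotonic() + timeout
    while pending:
        still_pending = []
        for task_id in pending:
            state = get_state(task_id)
            log(f"Task {task_id} is in state {state}")
            if state not in TERMINAL_STATES:
                still_pending.append(task_id)
        pending = still_pending
        if pending:
            if time.monotonic() >= deadline:
                raise TimeoutError(f"Tasks still pending: {pending}")
            log(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

This is a sketch of the observed behavior only; the real client's helper names and timeout handling may differ.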
2026-04-05 00:57:47.634703 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 00:57:47.634791 | orchestrator | Sunday 05 April 2026 00:53:16 +0000 (0:00:00.291) 0:00:00.291 **********
2026-04-05 00:57:47.634803 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:57:47.634816 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:57:47.634861 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:57:47.634872 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:57:47.634884 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:57:47.634895 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:57:47.634906 | orchestrator |
2026-04-05 00:57:47.634917 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 00:57:47.634996 | orchestrator | Sunday 05 April 2026 00:53:17 +0000 (0:00:00.657) 0:00:00.949 **********
2026-04-05 00:57:47.635008 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-04-05 00:57:47.635023 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-04-05 00:57:47.635043 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-04-05 00:57:47.635061 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-04-05 00:57:47.635079 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-04-05 00:57:47.635165 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-04-05 00:57:47.635186 | orchestrator |
2026-04-05 00:57:47.635207 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2026-04-05 00:57:47.635229 | orchestrator |
2026-04-05 00:57:47.635250 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2026-04-05 00:57:47.635271 | orchestrator | Sunday 05 April 2026 00:53:18 +0000 (0:00:01.215) 0:00:02.165 **********
2026-04-05 00:57:47.635316 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:57:47.635340 | orchestrator |
2026-04-05 00:57:47.635360 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2026-04-05 00:57:47.635379 | orchestrator | Sunday 05 April 2026 00:53:20 +0000 (0:00:01.371) 0:00:03.536 **********
2026-04-05 00:57:47.635403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:57:47.635431 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:57:47.635451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:57:47.635489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:57:47.635503 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:57:47.635517 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:57:47.635529 | orchestrator |
2026-04-05 00:57:47.635558 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2026-04-05 00:57:47.635570 | orchestrator | Sunday 05 April 2026 00:53:21 +0000 (0:00:01.919) 0:00:05.456 **********
2026-04-05 00:57:47.635581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:57:47.635605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:57:47.635617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:57:47.635628 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:57:47.635639 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:57:47.635650 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:57:47.635661 | orchestrator |
2026-04-05 00:57:47.635671 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2026-04-05 00:57:47.635682 | orchestrator | Sunday 05 April 2026 00:53:24 +0000 (0:00:02.621) 0:00:08.077 **********
2026-04-05 00:57:47.635699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:57:47.635710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:57:47.635729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:57:47.635749 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:57:47.635760 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:57:47.635771 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:57:47.635782 | orchestrator |
2026-04-05 00:57:47.635793 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2026-04-05 00:57:47.635804 | orchestrator | Sunday 05 April 2026 00:53:26 +0000 (0:00:02.030) 0:00:10.107 **********
2026-04-05 00:57:47.635815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:57:47.635826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:57:47.635837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:57:47.635853 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:57:47.635865 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:57:47.635876 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:57:47.635894 | orchestrator |
2026-04-05 00:57:47.635911 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************
2026-04-05 00:57:47.635922 | orchestrator | Sunday 05 April 2026 00:53:28 +0000 (0:00:01.924) 0:00:12.268 **********
2026-04-05 00:57:47.635933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:57:47.635944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:57:47.635955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:57:47.635967 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:57:47.635978 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:57:47.635989 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:57:47.636000 | orchestrator |
2026-04-05 00:57:47.636011 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] ***
2026-04-05 00:57:47.636023 | orchestrator | Sunday 05 April 2026 00:53:30 +0000 (0:00:00.917) 0:00:14.193 **********
2026-04-05 00:57:47.636039 | orchestrator | changed: [testbed-node-0] => {
2026-04-05 00:57:47.636051 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 00:57:47.636062 | orchestrator | }
2026-04-05 00:57:47.636073 | orchestrator | changed: [testbed-node-1] => {
2026-04-05 00:57:47.636083 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 00:57:47.636094 | orchestrator | }
2026-04-05 00:57:47.636105 | orchestrator | changed: [testbed-node-2] => {
2026-04-05 00:57:47.636116 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 00:57:47.636157 | orchestrator | }
2026-04-05 00:57:47.636168 | orchestrator | changed: [testbed-node-3] => {
2026-04-05 00:57:47.636179 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 00:57:47.636190 | orchestrator | }
2026-04-05 00:57:47.636201 | orchestrator | changed: [testbed-node-4] => {
2026-04-05 00:57:47.636211 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 00:57:47.636222 | orchestrator | }
2026-04-05 00:57:47.636232 | orchestrator | changed: [testbed-node-5] => {
2026-04-05 00:57:47.636243 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 00:57:47.636254 | orchestrator | }
2026-04-05 00:57:47.636264 | orchestrator |
2026-04-05 00:57:47.636275 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-05 00:57:47.636286 | orchestrator | Sunday 05 April 2026 00:53:31 +0000 (0:00:00.917) 0:00:15.110 **********
2026-04-05 00:57:47.636298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:57:47.636318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:57:47.636330 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:57:47.636341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:57:47.636352 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:57:47.636363 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:57:47.636374 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:57:47.636385 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:57:47.636396 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:57:47.636407 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:57:47.636417 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:57:47.636428 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:57:47.636439 | orchestrator |
2026-04-05 00:57:47.636450 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2026-04-05 00:57:47.636485 | orchestrator | Sunday 05 April 2026 00:53:33 +0000 (0:00:01.679) 0:00:16.790 **********
2026-04-05 00:57:47.636497 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:57:47.636507 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:57:47.636518 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:57:47.636529 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:57:47.636539 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:57:47.636550 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:57:47.636560 | orchestrator |
2026-04-05 00:57:47.636571 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2026-04-05 00:57:47.636582 | orchestrator | Sunday 05 April 2026 00:53:36 +0000 (0:00:03.121) 0:00:19.912 **********
2026-04-05 00:57:47.636598 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2026-04-05 00:57:47.636609 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2026-04-05 00:57:47.636620 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2026-04-05 00:57:47.636631 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2026-04-05 00:57:47.636642 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-04-05 00:57:47.636652 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-04-05 00:57:47.636672 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-05 00:57:47.636683 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-05 00:57:47.636694 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-05 00:57:47.636704 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-05 00:57:47.636715 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-05 00:57:47.636726 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-05 00:57:47.636743 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-05 00:57:47.636757 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-05 00:57:47.636768 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-05 00:57:47.636779 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-05 00:57:47.636790 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-05 00:57:47.636800 | orchestrator | 
changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-05 00:57:47.636811 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-05 00:57:47.636823 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-05 00:57:47.636834 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-05 00:57:47.636844 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-05 00:57:47.636855 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-05 00:57:47.636872 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-05 00:57:47.636883 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-05 00:57:47.636894 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-05 00:57:47.636904 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-05 00:57:47.636915 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-05 00:57:47.636925 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-05 00:57:47.636936 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-05 00:57:47.636947 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-05 00:57:47.636958 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 
'value': False}) 2026-04-05 00:57:47.636968 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-05 00:57:47.636979 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-05 00:57:47.636990 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-05 00:57:47.637000 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-05 00:57:47.637011 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-05 00:57:47.637022 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-05 00:57:47.637038 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-05 00:57:47.637049 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-05 00:57:47.637060 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-05 00:57:47.637070 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-05 00:57:47.637081 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-04-05 00:57:47.637093 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-04-05 00:57:47.637104 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-04-05 00:57:47.637115 | 
orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-04-05 00:57:47.637188 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-04-05 00:57:47.637208 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-04-05 00:57:47.637219 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-05 00:57:47.637230 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-05 00:57:47.637241 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-05 00:57:47.637252 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-05 00:57:47.637276 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-05 00:57:47.637287 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-05 00:57:47.637298 | orchestrator | 2026-04-05 00:57:47.637309 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-05 00:57:47.637320 | orchestrator | Sunday 05 April 2026 00:53:59 +0000 (0:00:23.034) 0:00:42.946 ********** 2026-04-05 00:57:47.637331 | orchestrator | 2026-04-05 00:57:47.637342 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-05 00:57:47.637352 | orchestrator | Sunday 05 April 2026 00:53:59 +0000 
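
The loop above sets per-chassis Open vSwitch `external_ids` keys: every node gets `ovn-remote`, the probe intervals, and `ovn-monitor-all`; the control/network hosts (node-0..2) additionally get `ovn-bridge-mappings` and the `enable-chassis-as-gw` CMS option, while the compute hosts (node-3..5) get `ovn-chassis-mac-mappings` instead. A minimal sketch of how those values fit together (the helper function is illustrative, not part of the playbook, which applies the items via `ovs-vsctl` on each host):

```python
# Illustrative only: reproduce the external_ids values from the log above.
# build_ovn_external_ids is a hypothetical helper, not kolla-ansible code.

def build_ovn_external_ids(sb_hosts, sb_port, gateway, chassis_mac="52:54:00:00:00:01", az="nova"):
    """Return the external_ids key/value pairs set on one chassis."""
    ids = {
        # Where ovn-controller finds the southbound DB (or its relays).
        "ovn-remote": ",".join(f"tcp:{h}:{sb_port}" for h in sb_hosts),
        "ovn-remote-probe-interval": "60000",   # milliseconds
        "ovn-openflow-probe-interval": "60",    # seconds
        "ovn-monitor-all": False,               # only monitor relevant rows
    }
    if gateway:
        # Gateway chassis: map the provider network and advertise as gateway.
        ids["ovn-bridge-mappings"] = "physnet1:br-ex"
        ids["ovn-cms-options"] = f"enable-chassis-as-gw,availability-zones={az}"
    else:
        # Compute chassis: pin a per-chassis MAC for the provider network.
        ids["ovn-chassis-mac-mappings"] = f"physnet1:{chassis_mac}"
    return ids

ids = build_ovn_external_ids(
    ["192.168.16.10", "192.168.16.11", "192.168.16.12"], 16641, gateway=True)
print(ids["ovn-remote"])
# tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641
```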
(0:00:00.051) 0:00:42.998 ********** 2026-04-05 00:57:47.637363 | orchestrator | 2026-04-05 00:57:47.637374 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-05 00:57:47.637385 | orchestrator | Sunday 05 April 2026 00:53:59 +0000 (0:00:00.149) 0:00:43.147 ********** 2026-04-05 00:57:47.637395 | orchestrator | 2026-04-05 00:57:47.637406 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-05 00:57:47.637416 | orchestrator | Sunday 05 April 2026 00:53:59 +0000 (0:00:00.051) 0:00:43.199 ********** 2026-04-05 00:57:47.637427 | orchestrator | 2026-04-05 00:57:47.637438 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-05 00:57:47.637448 | orchestrator | Sunday 05 April 2026 00:53:59 +0000 (0:00:00.051) 0:00:43.251 ********** 2026-04-05 00:57:47.637459 | orchestrator | 2026-04-05 00:57:47.637470 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-05 00:57:47.637481 | orchestrator | Sunday 05 April 2026 00:53:59 +0000 (0:00:00.053) 0:00:43.305 ********** 2026-04-05 00:57:47.637491 | orchestrator | 2026-04-05 00:57:47.637502 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-04-05 00:57:47.637512 | orchestrator | Sunday 05 April 2026 00:53:59 +0000 (0:00:00.053) 0:00:43.359 ********** 2026-04-05 00:57:47.637523 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:57:47.637534 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:57:47.637544 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:57:47.637555 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:57:47.637566 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:57:47.637576 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:57:47.637587 | orchestrator | 2026-04-05 00:57:47.637597 | orchestrator | RUNNING HANDLER [ovn-controller : Restart 
ovn-controller container] ************ 2026-04-05 00:57:47.637608 | orchestrator | Sunday 05 April 2026 00:54:01 +0000 (0:00:01.781) 0:00:45.140 ********** 2026-04-05 00:57:47.637619 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:57:47.637630 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:57:47.637640 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:57:47.637651 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:57:47.637662 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:57:47.637672 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:57:47.637683 | orchestrator | 2026-04-05 00:57:47.637694 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-04-05 00:57:47.637704 | orchestrator | 2026-04-05 00:57:47.637714 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-05 00:57:47.637723 | orchestrator | Sunday 05 April 2026 00:54:06 +0000 (0:00:04.620) 0:00:49.760 ********** 2026-04-05 00:57:47.637745 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:57:47.637755 | orchestrator | 2026-04-05 00:57:47.637764 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-05 00:57:47.637774 | orchestrator | Sunday 05 April 2026 00:54:08 +0000 (0:00:02.160) 0:00:51.920 ********** 2026-04-05 00:57:47.637783 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:57:47.637800 | orchestrator | 2026-04-05 00:57:47.637809 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-04-05 00:57:47.637818 | orchestrator | Sunday 05 April 2026 00:54:09 +0000 (0:00:01.273) 0:00:53.194 ********** 2026-04-05 00:57:47.637828 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:57:47.637837 | 
orchestrator | ok: [testbed-node-0] 2026-04-05 00:57:47.637847 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:57:47.637856 | orchestrator | 2026-04-05 00:57:47.637866 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-04-05 00:57:47.637875 | orchestrator | Sunday 05 April 2026 00:54:11 +0000 (0:00:01.555) 0:00:54.750 ********** 2026-04-05 00:57:47.637885 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:57:47.637894 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:57:47.637903 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:57:47.637913 | orchestrator | 2026-04-05 00:57:47.637922 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-04-05 00:57:47.637932 | orchestrator | Sunday 05 April 2026 00:54:11 +0000 (0:00:00.691) 0:00:55.441 ********** 2026-04-05 00:57:47.637941 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:57:47.637951 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:57:47.637960 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:57:47.637969 | orchestrator | 2026-04-05 00:57:47.637979 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-04-05 00:57:47.637995 | orchestrator | Sunday 05 April 2026 00:54:12 +0000 (0:00:00.995) 0:00:56.437 ********** 2026-04-05 00:57:47.638004 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:57:47.638014 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:57:47.638141 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:57:47.638152 | orchestrator | 2026-04-05 00:57:47.638162 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-04-05 00:57:47.638172 | orchestrator | Sunday 05 April 2026 00:54:13 +0000 (0:00:00.527) 0:00:56.965 ********** 2026-04-05 00:57:47.638181 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:57:47.638191 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:57:47.638200 | 
orchestrator | ok: [testbed-node-2] 2026-04-05 00:57:47.638210 | orchestrator | 2026-04-05 00:57:47.638219 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-04-05 00:57:47.638229 | orchestrator | Sunday 05 April 2026 00:54:14 +0000 (0:00:00.702) 0:00:57.667 ********** 2026-04-05 00:57:47.638238 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:57:47.638248 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:57:47.638257 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:57:47.638267 | orchestrator | 2026-04-05 00:57:47.638276 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-04-05 00:57:47.638286 | orchestrator | Sunday 05 April 2026 00:54:14 +0000 (0:00:00.803) 0:00:58.471 ********** 2026-04-05 00:57:47.638295 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:57:47.638305 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:57:47.638315 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:57:47.638324 | orchestrator | 2026-04-05 00:57:47.638333 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-04-05 00:57:47.638343 | orchestrator | Sunday 05 April 2026 00:54:15 +0000 (0:00:00.426) 0:00:58.897 ********** 2026-04-05 00:57:47.638353 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:57:47.638362 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:57:47.638371 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:57:47.638381 | orchestrator | 2026-04-05 00:57:47.638390 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-04-05 00:57:47.638400 | orchestrator | Sunday 05 April 2026 00:54:15 +0000 (0:00:00.302) 0:00:59.200 ********** 2026-04-05 00:57:47.638410 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:57:47.638419 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:57:47.638428 | 
orchestrator | skipping: [testbed-node-2] 2026-04-05 00:57:47.638438 | orchestrator | 2026-04-05 00:57:47.638447 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-04-05 00:57:47.638466 | orchestrator | Sunday 05 April 2026 00:54:16 +0000 (0:00:00.455) 0:00:59.655 ********** 2026-04-05 00:57:47.638476 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:57:47.638485 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:57:47.638494 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:57:47.638504 | orchestrator | 2026-04-05 00:57:47.638513 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-04-05 00:57:47.638523 | orchestrator | Sunday 05 April 2026 00:54:16 +0000 (0:00:00.600) 0:01:00.256 ********** 2026-04-05 00:57:47.638533 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:57:47.638542 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:57:47.638552 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:57:47.638561 | orchestrator | 2026-04-05 00:57:47.638570 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-04-05 00:57:47.638580 | orchestrator | Sunday 05 April 2026 00:54:17 +0000 (0:00:00.484) 0:01:00.741 ********** 2026-04-05 00:57:47.638589 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:57:47.638599 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:57:47.638608 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:57:47.638618 | orchestrator | 2026-04-05 00:57:47.638627 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-04-05 00:57:47.638637 | orchestrator | Sunday 05 April 2026 00:54:17 +0000 (0:00:00.422) 0:01:01.163 ********** 2026-04-05 00:57:47.638646 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:57:47.638656 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:57:47.638665 | 
orchestrator | skipping: [testbed-node-2] 2026-04-05 00:57:47.638675 | orchestrator | 2026-04-05 00:57:47.638684 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-04-05 00:57:47.638694 | orchestrator | Sunday 05 April 2026 00:54:17 +0000 (0:00:00.289) 0:01:01.453 ********** 2026-04-05 00:57:47.638703 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:57:47.638713 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:57:47.638728 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:57:47.638738 | orchestrator | 2026-04-05 00:57:47.638748 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-04-05 00:57:47.638757 | orchestrator | Sunday 05 April 2026 00:54:18 +0000 (0:00:00.527) 0:01:01.981 ********** 2026-04-05 00:57:47.638767 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:57:47.638776 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:57:47.638786 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:57:47.638795 | orchestrator | 2026-04-05 00:57:47.638805 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-04-05 00:57:47.638814 | orchestrator | Sunday 05 April 2026 00:54:18 +0000 (0:00:00.292) 0:01:02.273 ********** 2026-04-05 00:57:47.638823 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:57:47.638833 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:57:47.638842 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:57:47.638852 | orchestrator | 2026-04-05 00:57:47.638861 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-04-05 00:57:47.638871 | orchestrator | Sunday 05 April 2026 00:54:19 +0000 (0:00:00.297) 0:01:02.571 ********** 2026-04-05 00:57:47.638880 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:57:47.638890 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:57:47.638899 | 
orchestrator | skipping: [testbed-node-2] 2026-04-05 00:57:47.638909 | orchestrator | 2026-04-05 00:57:47.638918 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-05 00:57:47.638928 | orchestrator | Sunday 05 April 2026 00:54:19 +0000 (0:00:00.307) 0:01:02.879 ********** 2026-04-05 00:57:47.638937 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:57:47.638947 | orchestrator | 2026-04-05 00:57:47.638964 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-04-05 00:57:47.638974 | orchestrator | Sunday 05 April 2026 00:54:20 +0000 (0:00:00.788) 0:01:03.667 ********** 2026-04-05 00:57:47.638990 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:57:47.639000 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:57:47.639009 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:57:47.639018 | orchestrator | 2026-04-05 00:57:47.639028 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-04-05 00:57:47.639038 | orchestrator | Sunday 05 April 2026 00:54:20 +0000 (0:00:00.468) 0:01:04.135 ********** 2026-04-05 00:57:47.639047 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:57:47.639057 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:57:47.639066 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:57:47.639075 | orchestrator | 2026-04-05 00:57:47.639085 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-04-05 00:57:47.639095 | orchestrator | Sunday 05 April 2026 00:54:21 +0000 (0:00:00.435) 0:01:04.571 ********** 2026-04-05 00:57:47.639104 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:57:47.639114 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:57:47.639136 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:57:47.639146 | orchestrator | 2026-04-05 
00:57:47.639156 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-04-05 00:57:47.639165 | orchestrator | Sunday 05 April 2026 00:54:21 +0000 (0:00:00.563) 0:01:05.134 ********** 2026-04-05 00:57:47.639175 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:57:47.639185 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:57:47.639194 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:57:47.639203 | orchestrator | 2026-04-05 00:57:47.639214 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-04-05 00:57:47.639223 | orchestrator | Sunday 05 April 2026 00:54:22 +0000 (0:00:00.479) 0:01:05.614 ********** 2026-04-05 00:57:47.639233 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:57:47.639242 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:57:47.639252 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:57:47.639261 | orchestrator | 2026-04-05 00:57:47.639270 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-04-05 00:57:47.639280 | orchestrator | Sunday 05 April 2026 00:54:22 +0000 (0:00:00.469) 0:01:06.084 ********** 2026-04-05 00:57:47.639289 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:57:47.639299 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:57:47.639308 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:57:47.639318 | orchestrator | 2026-04-05 00:57:47.639327 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-04-05 00:57:47.639337 | orchestrator | Sunday 05 April 2026 00:54:22 +0000 (0:00:00.358) 0:01:06.442 ********** 2026-04-05 00:57:47.639346 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:57:47.639356 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:57:47.639394 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:57:47.639404 | orchestrator 
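
The lookup/bootstrap sequence above checks for existing DB container volumes, service port liveness, and a cluster leader before deciding how to start each `ovsdb-server`. In this run no volumes and no running cluster were found, so every "existing cluster" task was skipped and `bootstrap-initial.yml` set the "new cluster" args. A paraphrase of that decision, under the assumption that joining hosts are pointed at a remote cluster address (the function and its arguments are illustrative, not the role's actual code):

```python
# Illustrative paraphrase of the ovn-db bootstrap decision per host;
# nb_bootstrap_args is a hypothetical helper, not kolla-ansible code.

def nb_bootstrap_args(cluster_exists, has_volume, is_first_host, remote_addr):
    """Extra ovsdb-server args for this host's NB database."""
    if not cluster_exists and is_first_host:
        # No cluster anywhere yet: the first host creates a brand-new
        # clustered DB (the path taken in this run).
        return []
    if not has_volume:
        # Fresh host, cluster exists elsewhere (or is being created by
        # the first host): join it via a remote cluster address.
        return [f"--db-nb-cluster-remote-addr={remote_addr}"]
    # Existing member with its own volume: plain restart, no extra args.
    return []

print(nb_bootstrap_args(False, False, True, "192.168.16.10"))   # []
print(nb_bootstrap_args(False, False, False, "192.168.16.10"))
# ['--db-nb-cluster-remote-addr=192.168.16.10']
```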
| 2026-04-05 00:57:47.639414 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-04-05 00:57:47.639423 | orchestrator | Sunday 05 April 2026 00:54:23 +0000 (0:00:00.626) 0:01:07.069 ********** 2026-04-05 00:57:47.639433 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:57:47.639442 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:57:47.639452 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:57:47.639461 | orchestrator | 2026-04-05 00:57:47.639470 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-04-05 00:57:47.639480 | orchestrator | Sunday 05 April 2026 00:54:23 +0000 (0:00:00.380) 0:01:07.449 ********** 2026-04-05 00:57:47.639492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.639519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.639531 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.639549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.639560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.639570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.639580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:57:47.639592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.639602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.639628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:57:47.639639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.639657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:57:47.639667 | orchestrator | 2026-04-05 00:57:47.639677 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-04-05 00:57:47.639687 | orchestrator | Sunday 05 April 2026 00:54:27 +0000 (0:00:03.176) 0:01:10.626 ********** 2026-04-05 00:57:47.639697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 
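
The loop items echoed above all share one shape: a kolla-ansible service definition with `container_name`, `group`, `enabled`, `environment`, `image`, `volumes`, and `dimensions`, which drives both the config-directory task and the config.json templating. A small sketch of that shape, using the `ovn-northd` values from the log (the `validate_service` helper is illustrative, not part of the role):

```python
# Illustrative only: the shape of one kolla-ansible OVN service definition
# as it appears in the loop items above. validate_service is hypothetical.

REQUIRED = {"container_name", "group", "enabled", "image", "volumes"}

def validate_service(svc):
    """Check a service dict carries the fields the role iterates over."""
    missing = REQUIRED - svc.keys()
    if missing:
        raise ValueError(f"service definition missing: {sorted(missing)}")
    return True

ovn_northd = {
    "container_name": "ovn_northd",
    "group": "ovn-northd",
    "enabled": True,
    "environment": {
        "OVN_NB_DB": "tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641",
        "OVN_SB_DB": "tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642",
    },
    "image": "registry.osism.tech/kolla/ovn-northd:2025.1",
    "volumes": [
        "/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "kolla_logs:/var/log/kolla/",
    ],
    "dimensions": {},
}
validate_service(ovn_northd)
```

Note that only the DB services mount a named data volume (`ovn_nb_db:` / `ovn_sb_db:`); `ovn-northd` and the relay are stateless apart from logs.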
'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.639707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.639717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.639727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.639749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.639760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.639769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.639787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-04-05 00:57:47.639798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.639808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:57:47.639818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.639829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:57:47.639844 | orchestrator |
2026-04-05 00:57:47.639854 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] *************************
2026-04-05 00:57:47.639864 | orchestrator | Sunday 05 April 2026 00:54:32 +0000 (0:00:05.442) 0:01:16.068 **********
2026-04-05 00:57:47.639874 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1)
2026-04-05 00:57:47.639884 | orchestrator |
2026-04-05 00:57:47.639894 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] *****
2026-04-05 00:57:47.639903 | orchestrator | Sunday 05 April 2026 00:54:33 +0000 (0:00:00.735) 0:01:16.804 **********
2026-04-05 00:57:47.639913 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:57:47.639922 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:57:47.639936 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:57:47.639946 | orchestrator |
2026-04-05 00:57:47.639956 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] **********
2026-04-05 00:57:47.639965 | orchestrator | Sunday 05 April 2026 00:54:34 +0000 (0:00:00.689) 0:01:17.493 **********
2026-04-05 00:57:47.639975 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:57:47.639984 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:57:47.639993 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:57:47.640003 | orchestrator |
2026-04-05 00:57:47.640012 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] *******************
2026-04-05 00:57:47.640022 | orchestrator | Sunday 05 April 2026 00:54:35 +0000 (0:00:01.857) 0:01:19.351 **********
2026-04-05 00:57:47.640031 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:57:47.640040 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:57:47.640050 | orchestrator |
changed: [testbed-node-2] 2026-04-05 00:57:47.640059 | orchestrator | 2026-04-05 00:57:47.640068 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ******************** 2026-04-05 00:57:47.640078 | orchestrator | Sunday 05 April 2026 00:54:37 +0000 (0:00:02.024) 0:01:21.376 ********** 2026-04-05 00:57:47.640095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.640106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.640116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.640149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.640159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.640169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.640185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 
'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.640195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:57:47.640212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.640222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:57:47.640232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.640253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:57:47.640263 | orchestrator | 2026-04-05 00:57:47.640273 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-04-05 00:57:47.640282 | orchestrator | Sunday 05 April 2026 00:54:42 +0000 (0:00:04.885) 0:01:26.261 ********** 2026-04-05 00:57:47.640292 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 00:57:47.640302 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:57:47.640311 | orchestrator | } 2026-04-05 00:57:47.640321 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 00:57:47.640330 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:57:47.640340 | orchestrator | } 2026-04-05 00:57:47.640349 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 00:57:47.640359 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:57:47.640368 | orchestrator | } 2026-04-05 00:57:47.640378 | orchestrator | 2026-04-05 00:57:47.640387 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 00:57:47.640397 | orchestrator | Sunday 05 April 2026 00:54:43 +0000 (0:00:00.609) 0:01:26.871 ********** 2026-04-05 
00:57:47.640407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:57:47.640422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:57:47.640432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:57:47.640450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:57:47.640461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:57:47.640477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:57:47.640487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:57:47.640497 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:57:47.640507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:57:47.640522 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-1, testbed-node-0, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.640532 | orchestrator | 2026-04-05 00:57:47.640542 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 2026-04-05 00:57:47.640552 | orchestrator | Sunday 05 April 2026 00:54:47 +0000 (0:00:03.716) 0:01:30.587 ********** 2026-04-05 00:57:47.640562 | orchestrator | changed: 
[testbed-node-2] => (item=1)
2026-04-05 00:57:47.640572 | orchestrator | changed: [testbed-node-1] => (item=1)
2026-04-05 00:57:47.640581 | orchestrator | changed: [testbed-node-0] => (item=1)
2026-04-05 00:57:47.640591 | orchestrator |
2026-04-05 00:57:47.640600 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] ***
2026-04-05 00:57:47.640610 | orchestrator | Sunday 05 April 2026 00:55:15 +0000 (0:00:27.942) 0:01:58.529 **********
2026-04-05 00:57:47.640620 | orchestrator | changed: [testbed-node-0] => {
2026-04-05 00:57:47.640629 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 00:57:47.640639 | orchestrator | }
2026-04-05 00:57:47.640648 | orchestrator | changed: [testbed-node-1] => {
2026-04-05 00:57:47.640664 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 00:57:47.640674 | orchestrator | }
2026-04-05 00:57:47.640683 | orchestrator | changed: [testbed-node-2] => {
2026-04-05 00:57:47.640693 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 00:57:47.640703 | orchestrator | }
2026-04-05 00:57:47.640712 | orchestrator |
2026-04-05 00:57:47.640728 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-05 00:57:47.640737 | orchestrator | Sunday 05 April 2026 00:55:15 +0000 (0:00:00.604) 0:01:59.134 **********
2026-04-05 00:57:47.640747 | orchestrator |
2026-04-05 00:57:47.640757 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-05 00:57:47.640766 | orchestrator | Sunday 05 April 2026 00:55:15 +0000 (0:00:00.065) 0:01:59.200 **********
2026-04-05 00:57:47.640776 | orchestrator |
2026-04-05 00:57:47.640785 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-05 00:57:47.640795 | orchestrator | Sunday 05 April 2026 00:55:15 +0000 (0:00:00.065) 0:01:59.266 **********
2026-04-05 00:57:47.640805 | orchestrator |
2026-04-05 00:57:47.640814 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-04-05 00:57:47.640824 | orchestrator | Sunday 05 April 2026 00:55:15 +0000 (0:00:00.067) 0:01:59.333 **********
2026-04-05 00:57:47.640833 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:57:47.640843 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:57:47.640853 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:57:47.640862 | orchestrator |
2026-04-05 00:57:47.640872 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-04-05 00:57:47.640881 | orchestrator | Sunday 05 April 2026 00:55:30 +0000 (0:00:14.924) 0:02:14.258 **********
2026-04-05 00:57:47.640891 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:57:47.640900 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:57:47.640910 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:57:47.640919 | orchestrator |
2026-04-05 00:57:47.640929 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] *******************
2026-04-05 00:57:47.640939 | orchestrator | Sunday 05 April 2026 00:55:46 +0000 (0:00:16.053) 0:02:30.311 **********
2026-04-05 00:57:47.640948 | orchestrator | changed: [testbed-node-1] => (item=1)
2026-04-05 00:57:47.640958 | orchestrator | changed: [testbed-node-0] => (item=1)
2026-04-05 00:57:47.640967 | orchestrator | changed: [testbed-node-2] => (item=1)
2026-04-05 00:57:47.640977 | orchestrator |
2026-04-05 00:57:47.640987 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-04-05 00:57:47.640996 | orchestrator | Sunday 05 April 2026 00:56:01 +0000 (0:00:14.416) 0:02:44.727 **********
2026-04-05 00:57:47.641006 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:57:47.641015 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:57:47.641025 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:57:47.641034 | orchestrator |
2026-04-05 00:57:47.641044 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-04-05 00:57:47.641053 | orchestrator | Sunday 05 April 2026 00:56:15 +0000 (0:00:14.666) 0:02:59.394 **********
2026-04-05 00:57:47.641063 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:57:47.641073 | orchestrator |
2026-04-05 00:57:47.641082 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-04-05 00:57:47.641092 | orchestrator | Sunday 05 April 2026 00:56:16 +0000 (0:00:00.182) 0:02:59.576 **********
2026-04-05 00:57:47.641102 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:57:47.641111 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:57:47.641143 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:57:47.641154 | orchestrator |
2026-04-05 00:57:47.641164 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-04-05 00:57:47.641174 | orchestrator | Sunday 05 April 2026 00:56:17 +0000 (0:00:01.110) 0:03:00.687 **********
2026-04-05 00:57:47.641183 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:57:47.641193 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:57:47.641202 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:57:47.641218 | orchestrator |
2026-04-05 00:57:47.641228 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-04-05 00:57:47.641237 | orchestrator | Sunday 05 April 2026 00:56:17 +0000 (0:00:00.854) 0:03:01.367 **********
2026-04-05 00:57:47.641247 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:57:47.641257 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:57:47.641266 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:57:47.641276 | orchestrator |
2026-04-05 00:57:47.641285 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-04-05 00:57:47.641295 | orchestrator | Sunday 05 April 2026 00:56:18 +0000 (0:00:00.854) 0:03:02.222 **********
2026-04-05 00:57:47.641304 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:57:47.641314 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:57:47.641324 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:57:47.641334 | orchestrator |
2026-04-05 00:57:47.641343 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-04-05 00:57:47.641353 | orchestrator | Sunday 05 April 2026 00:56:19 +0000 (0:00:00.689) 0:03:02.911 **********
2026-04-05 00:57:47.641362 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:57:47.641372 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:57:47.641381 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:57:47.641391 | orchestrator |
2026-04-05 00:57:47.641400 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-04-05 00:57:47.641410 | orchestrator | Sunday 05 April 2026 00:56:20 +0000 (0:00:01.254) 0:03:04.165 **********
2026-04-05 00:57:47.641420 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:57:47.641429 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:57:47.641439 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:57:47.641448 | orchestrator |
2026-04-05 00:57:47.641457 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] ***************************************
2026-04-05 00:57:47.641467 | orchestrator | Sunday 05 April 2026 00:56:21 +0000 (0:00:00.896) 0:03:05.007 **********
2026-04-05 00:57:47.641477 | orchestrator | ok: [testbed-node-0] => (item=1)
2026-04-05 00:57:47.641486 | orchestrator | ok: [testbed-node-1] => (item=1)
2026-04-05 00:57:47.641496 | orchestrator | ok: [testbed-node-2] => (item=1)
2026-04-05 00:57:47.641505 | orchestrator |
2026-04-05 00:57:47.641515 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-04-05 00:57:47.641524 | orchestrator | Sunday 05 April 2026 00:56:22
+0000 (0:00:00.896) 0:03:05.904 ********** 2026-04-05 00:57:47.641534 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:57:47.641544 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:57:47.641553 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:57:47.641562 | orchestrator | 2026-04-05 00:57:47.641572 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-04-05 00:57:47.641588 | orchestrator | Sunday 05 April 2026 00:56:22 +0000 (0:00:00.308) 0:03:06.212 ********** 2026-04-05 00:57:47.641599 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.641609 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.641656 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 
'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.641673 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.641683 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.641698 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.641708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:57:47.641725 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.641735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:57:47.641745 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 
00:57:47.641761 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.641771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:57:47.641781 | orchestrator | 2026-04-05 00:57:47.641791 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-04-05 00:57:47.641800 | orchestrator | Sunday 05 April 2026 00:56:26 +0000 (0:00:03.289) 0:03:09.502 ********** 2026-04-05 00:57:47.641811 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.641826 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': 
{'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.641836 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.641853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.641863 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.641881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.641891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.641901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:57:47.641911 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.641926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:57:47.641936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.641952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:57:47.641962 | orchestrator | 2026-04-05 00:57:47.641972 | orchestrator | TASK [ovn-db : Ensure configuration for relays 
exists] ************************* 2026-04-05 00:57:47.641982 | orchestrator | Sunday 05 April 2026 00:56:32 +0000 (0:00:06.818) 0:03:16.320 ********** 2026-04-05 00:57:47.641992 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1) 2026-04-05 00:57:47.642001 | orchestrator | 2026-04-05 00:57:47.642060 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] ***** 2026-04-05 00:57:47.642073 | orchestrator | Sunday 05 April 2026 00:56:33 +0000 (0:00:00.650) 0:03:16.970 ********** 2026-04-05 00:57:47.642083 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:57:47.642092 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:57:47.642102 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:57:47.642111 | orchestrator | 2026-04-05 00:57:47.642137 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] ********** 2026-04-05 00:57:47.642147 | orchestrator | Sunday 05 April 2026 00:56:34 +0000 (0:00:00.642) 0:03:17.613 ********** 2026-04-05 00:57:47.642157 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:57:47.642166 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:57:47.642176 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:57:47.642185 | orchestrator | 2026-04-05 00:57:47.642195 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] ******************* 2026-04-05 00:57:47.642205 | orchestrator | Sunday 05 April 2026 00:56:35 +0000 (0:00:01.846) 0:03:19.460 ********** 2026-04-05 00:57:47.642214 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:57:47.642224 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:57:47.642233 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:57:47.642243 | orchestrator | 2026-04-05 00:57:47.642252 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ******************** 2026-04-05 00:57:47.642262 | orchestrator | Sunday 05 April 2026 00:56:37 
+0000 (0:00:01.347) 0:03:20.807 ********** 2026-04-05 00:57:47.642272 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.642282 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.642293 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.642308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 
'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.642319 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.642357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.642368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.642379 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:57:47.642389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.642399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:57:47.642409 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.642424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:57:47.642434 | orchestrator | 2026-04-05 00:57:47.642444 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-04-05 00:57:47.642454 | orchestrator | Sunday 05 April 2026 00:56:41 +0000 (0:00:03.856) 0:03:24.664 ********** 2026-04-05 00:57:47.642470 | orchestrator | ok: [testbed-node-0] => { 2026-04-05 00:57:47.642479 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:57:47.642489 | orchestrator | } 2026-04-05 00:57:47.642499 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 00:57:47.642508 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:57:47.642518 | orchestrator | } 2026-04-05 00:57:47.642527 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 00:57:47.642536 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:57:47.642546 | orchestrator | } 2026-04-05 00:57:47.642555 | orchestrator | 2026-04-05 00:57:47.642565 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 00:57:47.642575 | orchestrator | Sunday 05 April 2026 00:56:41 +0000 (0:00:00.361) 0:03:25.026 ********** 2026-04-05 00:57:47.642596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:57:47.642607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:57:47.642617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:57:47.642628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:57:47.642638 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:57:47.642648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:57:47.642669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:57:47.642680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:57:47.642696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:57:47.642706 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:57:47.642716 | orchestrator | 2026-04-05 00:57:47.642726 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 2026-04-05 00:57:47.642736 | orchestrator | Sunday 05 April 2026 00:56:44 +0000 (0:00:02.929) 0:03:27.955 ********** 2026-04-05 00:57:47.642745 | orchestrator | ok: [testbed-node-0] => (item=1) 2026-04-05 00:57:47.642755 | orchestrator | ok: [testbed-node-1] => (item=1) 2026-04-05 00:57:47.642764 | orchestrator | ok: [testbed-node-2] => (item=1) 2026-04-05 00:57:47.642774 | orchestrator | 2026-04-05 00:57:47.642783 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart 
containers] *** 2026-04-05 00:57:47.642793 | orchestrator | Sunday 05 April 2026 00:57:11 +0000 (0:00:26.757) 0:03:54.713 ********** 2026-04-05 00:57:47.642803 | orchestrator | ok: [testbed-node-0] => { 2026-04-05 00:57:47.642812 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:57:47.642822 | orchestrator | } 2026-04-05 00:57:47.642831 | orchestrator | ok: [testbed-node-1] => { 2026-04-05 00:57:47.642840 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:57:47.642850 | orchestrator | } 2026-04-05 00:57:47.642860 | orchestrator | ok: [testbed-node-2] => { 2026-04-05 00:57:47.642869 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:57:47.642878 | orchestrator | } 2026-04-05 00:57:47.642888 | orchestrator | 2026-04-05 00:57:47.642898 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-05 00:57:47.642907 | orchestrator | Sunday 05 April 2026 00:57:12 +0000 (0:00:00.829) 0:03:55.542 ********** 2026-04-05 00:57:47.642917 | orchestrator | 2026-04-05 00:57:47.642926 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-05 00:57:47.642936 | orchestrator | Sunday 05 April 2026 00:57:12 +0000 (0:00:00.102) 0:03:55.645 ********** 2026-04-05 00:57:47.642945 | orchestrator | 2026-04-05 00:57:47.642955 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-05 00:57:47.642971 | orchestrator | Sunday 05 April 2026 00:57:12 +0000 (0:00:00.094) 0:03:55.739 ********** 2026-04-05 00:57:47.642980 | orchestrator | 2026-04-05 00:57:47.642990 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-04-05 00:57:47.642999 | orchestrator | Sunday 05 April 2026 00:57:12 +0000 (0:00:00.096) 0:03:55.836 ********** 2026-04-05 00:57:47.643009 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:57:47.643018 | orchestrator | changed: [testbed-node-2] 2026-04-05 
00:57:47.643027 | orchestrator | 2026-04-05 00:57:47.643037 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-04-05 00:57:47.643046 | orchestrator | Sunday 05 April 2026 00:57:25 +0000 (0:00:13.629) 0:04:09.465 ********** 2026-04-05 00:57:47.643056 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:57:47.643065 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:57:47.643075 | orchestrator | 2026-04-05 00:57:47.643084 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-04-05 00:57:47.643094 | orchestrator | Sunday 05 April 2026 00:57:40 +0000 (0:00:14.738) 0:04:24.203 ********** 2026-04-05 00:57:47.643103 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:57:47.643113 | orchestrator | 2026-04-05 00:57:47.643143 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-04-05 00:57:47.643153 | orchestrator | Sunday 05 April 2026 00:57:40 +0000 (0:00:00.178) 0:04:24.382 ********** 2026-04-05 00:57:47.643163 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:57:47.643172 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:57:47.643182 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:57:47.643191 | orchestrator | 2026-04-05 00:57:47.643201 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-04-05 00:57:47.643210 | orchestrator | Sunday 05 April 2026 00:57:41 +0000 (0:00:00.915) 0:04:25.298 ********** 2026-04-05 00:57:47.643220 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:57:47.643229 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:57:47.643239 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:57:47.643248 | orchestrator | 2026-04-05 00:57:47.643258 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-04-05 00:57:47.643268 | orchestrator | Sunday 05 April 2026 00:57:42 +0000 
(0:00:00.665) 0:04:25.964 ********** 2026-04-05 00:57:47.643277 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:57:47.643287 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:57:47.643296 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:57:47.643306 | orchestrator | 2026-04-05 00:57:47.643315 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-04-05 00:57:47.643325 | orchestrator | Sunday 05 April 2026 00:57:43 +0000 (0:00:00.822) 0:04:26.786 ********** 2026-04-05 00:57:47.643334 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:57:47.643344 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:57:47.643353 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:57:47.643363 | orchestrator | 2026-04-05 00:57:47.643372 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-04-05 00:57:47.643382 | orchestrator | Sunday 05 April 2026 00:57:43 +0000 (0:00:00.580) 0:04:27.367 ********** 2026-04-05 00:57:47.643391 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:57:47.643407 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:57:47.643417 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:57:47.643427 | orchestrator | 2026-04-05 00:57:47.643436 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-04-05 00:57:47.643446 | orchestrator | Sunday 05 April 2026 00:57:44 +0000 (0:00:00.884) 0:04:28.251 ********** 2026-04-05 00:57:47.643455 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:57:47.643464 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:57:47.643474 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:57:47.643483 | orchestrator | 2026-04-05 00:57:47.643493 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] *************************************** 2026-04-05 00:57:47.643503 | orchestrator | Sunday 05 April 2026 00:57:45 +0000 (0:00:01.078) 0:04:29.329 ********** 2026-04-05 
00:57:47.643518 | orchestrator | ok: [testbed-node-0] => (item=1) 2026-04-05 00:57:47.643528 | orchestrator | ok: [testbed-node-1] => (item=1) 2026-04-05 00:57:47.643537 | orchestrator | ok: [testbed-node-2] => (item=1) 2026-04-05 00:57:47.643547 | orchestrator | 2026-04-05 00:57:47.643556 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:57:47.643566 | orchestrator | testbed-node-0 : ok=64  changed=26  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-04-05 00:57:47.643576 | orchestrator | testbed-node-1 : ok=62  changed=27  unreachable=0 failed=0 skipped=23  rescued=0 ignored=0 2026-04-05 00:57:47.643586 | orchestrator | testbed-node-2 : ok=62  changed=27  unreachable=0 failed=0 skipped=23  rescued=0 ignored=0 2026-04-05 00:57:47.643596 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 00:57:47.643605 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 00:57:47.643615 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 00:57:47.643624 | orchestrator | 2026-04-05 00:57:47.643634 | orchestrator | 2026-04-05 00:57:47.643643 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:57:47.643653 | orchestrator | Sunday 05 April 2026 00:57:46 +0000 (0:00:01.103) 0:04:30.433 ********** 2026-04-05 00:57:47.643662 | orchestrator | =============================================================================== 2026-04-05 00:57:47.643672 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 30.79s 2026-04-05 00:57:47.643681 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 28.55s 2026-04-05 00:57:47.643691 | orchestrator | service-check-containers : ovn_db | Check containers with 
iteration ---- 27.94s 2026-04-05 00:57:47.643700 | orchestrator | service-check-containers : ovn_db | Check containers with iteration ---- 26.76s 2026-04-05 00:57:47.643710 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 23.03s 2026-04-05 00:57:47.643719 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.67s 2026-04-05 00:57:47.643729 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 14.42s 2026-04-05 00:57:47.643738 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 6.82s 2026-04-05 00:57:47.643748 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.44s 2026-04-05 00:57:47.643757 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 4.89s 2026-04-05 00:57:47.643766 | orchestrator | ovn-controller : Restart ovn-controller container ----------------------- 4.62s 2026-04-05 00:57:47.643776 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 3.86s 2026-04-05 00:57:47.643791 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.72s 2026-04-05 00:57:47.643800 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 3.29s 2026-04-05 00:57:47.643810 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 3.18s 2026-04-05 00:57:47.643819 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.12s 2026-04-05 00:57:47.643829 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.93s 2026-04-05 00:57:47.643838 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.62s 2026-04-05 00:57:47.643848 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 
2.16s 2026-04-05 00:57:47.643857 | orchestrator | ovn-db : include_tasks -------------------------------------------------- 2.16s 2026-04-05 00:57:47.643873 | orchestrator | 2026-04-05 00:57:47 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:57:47.643882 | orchestrator | 2026-04-05 00:57:47 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:57:47.643892 | orchestrator | 2026-04-05 00:57:47 | INFO  | Wait 1 second(s) until the next check
00:58:48.670206 | orchestrator | 2026-04-05 00:58:48 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:58:48.670270 | orchestrator | 2026-04-05 00:58:48 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:58:51.704911 | orchestrator | 2026-04-05 00:58:51 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:58:51.710064 | orchestrator | 2026-04-05 00:58:51 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:58:51.710140 | orchestrator | 2026-04-05 00:58:51 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:58:54.757828 | orchestrator | 2026-04-05 00:58:54 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:58:54.758308 | orchestrator | 2026-04-05 00:58:54 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:58:54.758346 | orchestrator | 2026-04-05 00:58:54 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:58:57.812744 | orchestrator | 2026-04-05 00:58:57 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:58:57.814299 | orchestrator | 2026-04-05 00:58:57 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:58:57.814426 | orchestrator | 2026-04-05 00:58:57 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:00.869941 | orchestrator | 2026-04-05 00:59:00 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:59:00.872141 | orchestrator | 2026-04-05 00:59:00 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:59:00.872176 | orchestrator | 2026-04-05 00:59:00 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:03.920741 | orchestrator | 2026-04-05 00:59:03 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:59:03.920867 | orchestrator | 2026-04-05 00:59:03 | INFO  | Task 
213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:59:03.920889 | orchestrator | 2026-04-05 00:59:03 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:06.971864 | orchestrator | 2026-04-05 00:59:06 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:59:06.976821 | orchestrator | 2026-04-05 00:59:06 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:59:06.976883 | orchestrator | 2026-04-05 00:59:06 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:10.022798 | orchestrator | 2026-04-05 00:59:10 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:59:10.023872 | orchestrator | 2026-04-05 00:59:10 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:59:10.023923 | orchestrator | 2026-04-05 00:59:10 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:13.068555 | orchestrator | 2026-04-05 00:59:13 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:59:13.072220 | orchestrator | 2026-04-05 00:59:13 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:59:13.072282 | orchestrator | 2026-04-05 00:59:13 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:16.113307 | orchestrator | 2026-04-05 00:59:16 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:59:16.115013 | orchestrator | 2026-04-05 00:59:16 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:59:16.115188 | orchestrator | 2026-04-05 00:59:16 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:19.154788 | orchestrator | 2026-04-05 00:59:19 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:59:19.156029 | orchestrator | 2026-04-05 00:59:19 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 
00:59:19.156127 | orchestrator | 2026-04-05 00:59:19 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:22.207451 | orchestrator | 2026-04-05 00:59:22 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:59:22.208727 | orchestrator | 2026-04-05 00:59:22 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:59:22.208767 | orchestrator | 2026-04-05 00:59:22 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:25.255990 | orchestrator | 2026-04-05 00:59:25 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:59:25.256276 | orchestrator | 2026-04-05 00:59:25 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:59:25.256302 | orchestrator | 2026-04-05 00:59:25 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:28.302299 | orchestrator | 2026-04-05 00:59:28 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:59:28.303676 | orchestrator | 2026-04-05 00:59:28 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:59:28.303777 | orchestrator | 2026-04-05 00:59:28 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:31.346913 | orchestrator | 2026-04-05 00:59:31 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:59:31.349511 | orchestrator | 2026-04-05 00:59:31 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:59:31.350138 | orchestrator | 2026-04-05 00:59:31 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:34.398240 | orchestrator | 2026-04-05 00:59:34 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:59:34.400699 | orchestrator | 2026-04-05 00:59:34 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:59:34.400744 | orchestrator | 2026-04-05 00:59:34 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 00:59:37.455218 | orchestrator | 2026-04-05 00:59:37 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:59:37.456392 | orchestrator | 2026-04-05 00:59:37 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:59:37.456494 | orchestrator | 2026-04-05 00:59:37 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:40.509567 | orchestrator | 2026-04-05 00:59:40 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:59:40.512307 | orchestrator | 2026-04-05 00:59:40 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:59:40.512400 | orchestrator | 2026-04-05 00:59:40 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:43.557370 | orchestrator | 2026-04-05 00:59:43 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:59:43.557464 | orchestrator | 2026-04-05 00:59:43 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:59:43.557510 | orchestrator | 2026-04-05 00:59:43 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:46.615075 | orchestrator | 2026-04-05 00:59:46 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state STARTED 2026-04-05 00:59:46.619752 | orchestrator | 2026-04-05 00:59:46 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:59:46.620494 | orchestrator | 2026-04-05 00:59:46 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:49.670678 | orchestrator | 2026-04-05 00:59:49 | INFO  | Task 90fc6a4f-e939-4ed3-8492-179f7b47bedf is in state SUCCESS 2026-04-05 00:59:49.672581 | orchestrator | 2026-04-05 00:59:49.672644 | orchestrator | 2026-04-05 00:59:49.672672 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 00:59:49.672912 | orchestrator | 2026-04-05 00:59:49.672938 | orchestrator | TASK 
[Group hosts based on Kolla action] *************************************** 2026-04-05 00:59:49.672959 | orchestrator | Sunday 05 April 2026 00:51:53 +0000 (0:00:00.626) 0:00:00.626 ********** 2026-04-05 00:59:49.672980 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:59:49.673256 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:59:49.673277 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:59:49.673295 | orchestrator | 2026-04-05 00:59:49.673318 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 00:59:49.673339 | orchestrator | Sunday 05 April 2026 00:51:53 +0000 (0:00:00.491) 0:00:01.118 ********** 2026-04-05 00:59:49.673362 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-04-05 00:59:49.673384 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-04-05 00:59:49.673400 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-04-05 00:59:49.673413 | orchestrator | 2026-04-05 00:59:49.673427 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-04-05 00:59:49.673440 | orchestrator | 2026-04-05 00:59:49.673561 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-04-05 00:59:49.673677 | orchestrator | Sunday 05 April 2026 00:51:54 +0000 (0:00:00.616) 0:00:01.735 ********** 2026-04-05 00:59:49.673708 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:59:49.673720 | orchestrator | 2026-04-05 00:59:49.673742 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-04-05 00:59:49.673753 | orchestrator | Sunday 05 April 2026 00:51:55 +0000 (0:00:01.011) 0:00:02.746 ********** 2026-04-05 00:59:49.673764 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:59:49.673775 | orchestrator | ok: [testbed-node-0] 
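The "Task … is in state STARTED / Wait 1 second(s) until the next check" records earlier in this log come from a client polling submitted task IDs until none is still running (the observed ~3 s cadence suggests the advertised 1 s wait plus per-check RPC overhead). A minimal sketch of such a wait loop, assuming hypothetical names (`wait_for_tasks`, `fetch_state`) rather than the real OSISM client API:

```python
import time

def wait_for_tasks(task_ids, fetch_state, interval=1.0, timeout=7200):
    """Poll fetch_state(task_id) until no task reports STARTED.

    Returns a dict mapping task ID to its final state (e.g. SUCCESS).
    Raises TimeoutError if tasks are still running after `timeout` seconds.
    """
    deadline = time.monotonic() + timeout
    while True:
        # Query the current state of every task we are waiting on.
        states = {tid: fetch_state(tid) for tid in task_ids}
        for tid, state in states.items():
            print(f"INFO  | Task {tid} is in state {state}")
        if all(s != "STARTED" for s in states.values()):
            return states
        if time.monotonic() >= deadline:
            raise TimeoutError(f"tasks still running: {states}")
        print(f"INFO  | Wait {interval:g} second(s) until the next check")
        time.sleep(interval)

def make_fake_backend():
    """Illustrative stand-in backend: each task reports STARTED twice,
    then SUCCESS, so the loop above terminates after three checks."""
    counters = {}
    def fetch_state(tid):
        n = counters.get(tid, 0)
        counters[tid] = n + 1
        return "STARTED" if n < 2 else "SUCCESS"
    return fetch_state
```

For example, `wait_for_tasks(["a", "b"], make_fake_backend(), interval=0.01)` polls both fake tasks until each flips to SUCCESS and returns their final states.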
2026-04-05 00:59:49.673786 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:59:49.673797 | orchestrator | 2026-04-05 00:59:49.673808 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-04-05 00:59:49.673819 | orchestrator | Sunday 05 April 2026 00:51:57 +0000 (0:00:02.115) 0:00:04.862 ********** 2026-04-05 00:59:49.673830 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:59:49.673841 | orchestrator | 2026-04-05 00:59:49.673852 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-04-05 00:59:49.673924 | orchestrator | Sunday 05 April 2026 00:51:58 +0000 (0:00:01.106) 0:00:05.968 ********** 2026-04-05 00:59:49.673938 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:59:49.673950 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:59:49.673961 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:59:49.673980 | orchestrator | 2026-04-05 00:59:49.673999 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-04-05 00:59:49.674137 | orchestrator | Sunday 05 April 2026 00:52:00 +0000 (0:00:01.656) 0:00:07.624 ********** 2026-04-05 00:59:49.674159 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-04-05 00:59:49.674252 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-04-05 00:59:49.674470 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-04-05 00:59:49.674501 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-04-05 00:59:49.674545 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-04-05 00:59:49.674580 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 
'value': 1}) 2026-04-05 00:59:49.674602 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-04-05 00:59:49.674624 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-04-05 00:59:49.674686 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-04-05 00:59:49.674699 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-04-05 00:59:49.674709 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-04-05 00:59:49.674720 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-04-05 00:59:49.674731 | orchestrator | 2026-04-05 00:59:49.674741 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-05 00:59:49.674752 | orchestrator | Sunday 05 April 2026 00:52:03 +0000 (0:00:03.557) 0:00:11.181 ********** 2026-04-05 00:59:49.674763 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-04-05 00:59:49.674775 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-04-05 00:59:49.674786 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-04-05 00:59:49.674796 | orchestrator | 2026-04-05 00:59:49.674807 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-05 00:59:49.674853 | orchestrator | Sunday 05 April 2026 00:52:05 +0000 (0:00:01.408) 0:00:12.590 ********** 2026-04-05 00:59:49.674865 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-04-05 00:59:49.674901 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-04-05 00:59:49.674913 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-04-05 00:59:49.674939 | orchestrator | 2026-04-05 00:59:49.674951 | orchestrator | TASK [module-load : 
Drop module persistence] *********************************** 2026-04-05 00:59:49.674993 | orchestrator | Sunday 05 April 2026 00:52:07 +0000 (0:00:02.771) 0:00:15.361 ********** 2026-04-05 00:59:49.675231 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-04-05 00:59:49.675298 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.675342 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-04-05 00:59:49.675354 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.675365 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-04-05 00:59:49.675376 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.675387 | orchestrator | 2026-04-05 00:59:49.675398 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-04-05 00:59:49.675410 | orchestrator | Sunday 05 April 2026 00:52:09 +0000 (0:00:01.790) 0:00:17.151 ********** 2026-04-05 00:59:49.675476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-05 00:59:49.675497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-05 00:59:49.675535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-05 00:59:49.675544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 00:59:49.675553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 00:59:49.675568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 00:59:49.675577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-05 00:59:49.675586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-05 00:59:49.675602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-05 00:59:49.675610 | orchestrator | 2026-04-05 00:59:49.675618 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-04-05 00:59:49.675626 | orchestrator | Sunday 05 April 2026 00:52:12 +0000 (0:00:03.202) 0:00:20.354 ********** 2026-04-05 00:59:49.675634 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:49.675642 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:49.675650 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:59:49.675658 | orchestrator | 2026-04-05 00:59:49.675665 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-04-05 00:59:49.675673 | orchestrator | Sunday 05 April 2026 00:52:15 +0000 (0:00:02.975) 0:00:23.330 ********** 2026-04-05 00:59:49.675681 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-04-05 00:59:49.675689 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-04-05 00:59:49.675697 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-04-05 00:59:49.675705 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-04-05 00:59:49.675717 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-04-05 00:59:49.675725 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-04-05 00:59:49.675733 | orchestrator | 2026-04-05 00:59:49.675741 | orchestrator | TASK 
[loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-04-05 00:59:49.675748 | orchestrator | Sunday 05 April 2026 00:52:19 +0000 (0:00:04.062) 0:00:27.393 ********** 2026-04-05 00:59:49.675756 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:49.675764 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:49.675803 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:59:49.675812 | orchestrator | 2026-04-05 00:59:49.675819 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-04-05 00:59:49.675827 | orchestrator | Sunday 05 April 2026 00:52:22 +0000 (0:00:02.419) 0:00:29.812 ********** 2026-04-05 00:59:49.675835 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:59:49.675939 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:59:49.675948 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:59:49.675956 | orchestrator | 2026-04-05 00:59:49.675964 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-04-05 00:59:49.675972 | orchestrator | Sunday 05 April 2026 00:52:24 +0000 (0:00:02.091) 0:00:31.904 ********** 2026-04-05 00:59:49.675980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-05 00:59:49.676064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:59:49.676091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:59:49.676102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__89b2b27112b2da96f66ff91877beeb816608b641', '__omit_place_holder__89b2b27112b2da96f66ff91877beeb816608b641'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-05 00:59:49.676111 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.676119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-05 00:59:49.676133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 00:59:49.676141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 00:59:49.676150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__89b2b27112b2da96f66ff91877beeb816608b641', '__omit_place_holder__89b2b27112b2da96f66ff91877beeb816608b641'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-05 00:59:49.676164 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:59:49.676239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-05 00:59:49.676250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 00:59:49.676258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 00:59:49.676271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__89b2b27112b2da96f66ff91877beeb816608b641', '__omit_place_holder__89b2b27112b2da96f66ff91877beeb816608b641'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-05 00:59:49.676290 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:59:49.676299 | orchestrator |
2026-04-05 00:59:49.676307 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************
2026-04-05 00:59:49.676315 | orchestrator | Sunday 05 April 2026 00:52:26 +0000 (0:00:02.525) 0:00:34.429 **********
2026-04-05 00:59:49.676323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-05 00:59:49.676331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-05 00:59:49.676353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-05 00:59:49.676371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 00:59:49.676380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 00:59:49.676388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__89b2b27112b2da96f66ff91877beeb816608b641', '__omit_place_holder__89b2b27112b2da96f66ff91877beeb816608b641'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-05 00:59:49.676400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 00:59:49.676408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 00:59:49.676421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 00:59:49.676436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 00:59:49.676444 | orchestrator | skipping: [testbed-node-2] => (item={'key':
'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__89b2b27112b2da96f66ff91877beeb816608b641', '__omit_place_holder__89b2b27112b2da96f66ff91877beeb816608b641'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-05 00:59:49.676461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__89b2b27112b2da96f66ff91877beeb816608b641', '__omit_place_holder__89b2b27112b2da96f66ff91877beeb816608b641'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-05 00:59:49.676470 | orchestrator |
2026-04-05 00:59:49.676478 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] **************
2026-04-05 00:59:49.676486 | orchestrator | Sunday 05 April 2026 00:52:33 +0000 (0:00:06.387) 0:00:40.817 **********
2026-04-05 00:59:49.676499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-05 00:59:49.676541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-05 00:59:49.676556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-05 00:59:49.676570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 00:59:49.676579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 00:59:49.676587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 00:59:49.676595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/',
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 00:59:49.676629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 00:59:49.676638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 00:59:49.676651 | orchestrator |
2026-04-05 00:59:49.676659 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2026-04-05 00:59:49.676667 | orchestrator | Sunday 05 April 2026 00:52:38 +0000 (0:00:05.647) 0:00:46.464 **********
2026-04-05 00:59:49.676676 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-04-05 00:59:49.676684 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-04-05 00:59:49.676692 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-04-05 00:59:49.676700 | orchestrator |
2026-04-05 00:59:49.676708 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2026-04-05 00:59:49.676716 | orchestrator | Sunday 05 April 2026 00:52:44 +0000 (0:00:05.139) 0:00:51.603 **********
2026-04-05 00:59:49.676723 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-04-05 00:59:49.676732 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-04-05 00:59:49.676740 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-04-05 00:59:49.676748 | orchestrator |
2026-04-05 00:59:49.676790 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2026-04-05 00:59:49.676822 | orchestrator | Sunday 05 April 2026 00:52:50 +0000 (0:00:06.478) 0:00:58.082 **********
2026-04-05 00:59:49.676831 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:59:49.676839 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:59:49.676847 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:59:49.676855 | orchestrator |
2026-04-05 00:59:49.676863 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2026-04-05 00:59:49.676916 | orchestrator | Sunday 05 April 2026 00:52:52 +0000 (0:00:01.551) 0:00:59.633 **********
2026-04-05 00:59:49.676926 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-04-05 00:59:49.676935 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-04-05 00:59:49.676943 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-04-05 00:59:49.676951 | orchestrator |
2026-04-05 00:59:49.676958 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2026-04-05 00:59:49.676966 | orchestrator | Sunday 05 April 2026 00:52:54 +0000 (0:00:02.572) 0:01:02.205 **********
2026-04-05 00:59:49.676974 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-04-05 00:59:49.676982 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-04-05 00:59:49.676990 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-04-05 00:59:49.676998 | orchestrator |
2026-04-05 00:59:49.677024 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-04-05 00:59:49.677034 | orchestrator | Sunday 05 April 2026 00:52:57 +0000 (0:00:02.359) 0:01:04.564 **********
2026-04-05 00:59:49.677042 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:59:49.677050 | orchestrator |
2026-04-05 00:59:49.677057 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2026-04-05 00:59:49.677065 | orchestrator | Sunday 05 April 2026 00:52:57 +0000 (0:00:00.866) 0:01:05.431 **********
2026-04-05 00:59:49.677073 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2026-04-05 00:59:49.677087 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2026-04-05 00:59:49.677095 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2026-04-05 00:59:49.677103 | orchestrator |
2026-04-05 00:59:49.677111 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2026-04-05 00:59:49.677118 | orchestrator | Sunday 05 April 2026 00:53:00 +0000 (0:00:02.945) 0:01:08.377 **********
2026-04-05 00:59:49.677135 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2026-04-05 00:59:49.677144 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2026-04-05 00:59:49.677152 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2026-04-05 00:59:49.677160 | orchestrator |
2026-04-05 00:59:49.677168 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] ***************************
2026-04-05 00:59:49.677175 | orchestrator | Sunday 05 April 2026 00:53:03 +0000 (0:00:02.281) 0:01:10.658 **********
2026-04-05 00:59:49.677183 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:59:49.677206 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:59:49.677214 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:59:49.677222 | orchestrator |
2026-04-05 00:59:49.677230 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] ****************************
2026-04-05 00:59:49.677238 | orchestrator | Sunday 05 April 2026 00:53:03 +0000 (0:00:00.694) 0:01:11.353 **********
2026-04-05 00:59:49.677246 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:59:49.677253 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:59:49.677261 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:59:49.677269 | orchestrator |
2026-04-05 00:59:49.677277 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-04-05 00:59:49.677285 | orchestrator | Sunday 05 April 2026 00:53:04 +0000 (0:00:00.779) 0:01:12.132 **********
2026-04-05 00:59:49.677320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-05 00:59:49.677337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-05 00:59:49.677346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-05 00:59:49.677354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 00:59:49.677369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 00:59:49.677382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 00:59:49.677390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True,
'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 00:59:49.677398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 00:59:49.677411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 00:59:49.677419 | orchestrator |
2026-04-05 00:59:49.677427 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-04-05 00:59:49.677436 | orchestrator | Sunday 05 April 2026 00:53:09 +0000 (0:00:04.948) 0:01:17.081 **********
2026-04-05 00:59:49.677444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-05 00:59:49.677459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 00:59:49.677468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 00:59:49.677476 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:59:49.677488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-05 00:59:49.677497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 00:59:49.677505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 00:59:49.677513 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:59:49.677527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-05 00:59:49.677550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 00:59:49.677564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 00:59:49.677572 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:59:49.677580 | orchestrator |
2026-04-05 00:59:49.677588 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-04-05 00:59:49.677596 | orchestrator | Sunday 05 April 2026 00:53:10 +0000 (0:00:00.649) 0:01:17.731 **********
2026-04-05 00:59:49.677608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-05 00:59:49.677617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 00:59:49.677625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 00:59:49.677646 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:59:49.677660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-05 00:59:49.677674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 00:59:49.677682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 00:59:49.677690 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:59:49.677698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1',
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-05 00:59:49.677711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:59:49.677727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:59:49.677735 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.677743 | orchestrator | 2026-04-05 00:59:49.677751 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-04-05 00:59:49.677759 | orchestrator | Sunday 05 April 2026 00:53:11 +0000 (0:00:01.216) 0:01:18.947 
********** 2026-04-05 00:59:49.677767 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-05 00:59:49.677774 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-05 00:59:49.677782 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-05 00:59:49.677790 | orchestrator | 2026-04-05 00:59:49.677853 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-04-05 00:59:49.677861 | orchestrator | Sunday 05 April 2026 00:53:13 +0000 (0:00:01.825) 0:01:20.772 ********** 2026-04-05 00:59:49.677875 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-05 00:59:49.677888 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-05 00:59:49.677911 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-05 00:59:49.677919 | orchestrator | 2026-04-05 00:59:49.677927 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-04-05 00:59:49.677935 | orchestrator | Sunday 05 April 2026 00:53:14 +0000 (0:00:01.754) 0:01:22.527 ********** 2026-04-05 00:59:49.677942 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-05 00:59:49.677950 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-05 00:59:49.677958 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-05 00:59:49.677966 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 
'dest': 'id_rsa.pub'})  2026-04-05 00:59:49.677974 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.677982 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-05 00:59:49.677990 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.677998 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-05 00:59:49.678090 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.678103 | orchestrator | 2026-04-05 00:59:49.678111 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-04-05 00:59:49.678119 | orchestrator | Sunday 05 April 2026 00:53:16 +0000 (0:00:01.053) 0:01:23.581 ********** 2026-04-05 00:59:49.678128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-05 00:59:49.678142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-05 00:59:49.678151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-05 00:59:49.678159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 00:59:49.678184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 00:59:49.678193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 00:59:49.678201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-05 00:59:49.678209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-05 00:59:49.678221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-05 00:59:49.678230 | orchestrator | 2026-04-05 00:59:49.678237 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-04-05 00:59:49.678245 | orchestrator | Sunday 05 April 2026 00:53:18 +0000 (0:00:02.904) 0:01:26.486 ********** 2026-04-05 00:59:49.678253 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 00:59:49.678262 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:59:49.678270 | orchestrator | } 2026-04-05 00:59:49.678278 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 00:59:49.678285 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:59:49.678293 | orchestrator | } 2026-04-05 00:59:49.678307 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 00:59:49.678315 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:59:49.678323 | orchestrator | } 2026-04-05 00:59:49.678330 | orchestrator | 2026-04-05 00:59:49.678338 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 00:59:49.678346 | orchestrator | Sunday 05 April 2026 00:53:19 +0000 (0:00:00.679) 0:01:27.166 ********** 2026-04-05 00:59:49.678354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-05 00:59:49.678371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:59:49.678379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:59:49.678387 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.678395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-05 00:59:49.678404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:59:49.678416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:59:49.678431 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.678439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-05 00:59:49.678447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:59:49.678461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:59:49.678469 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.678477 | orchestrator | 2026-04-05 00:59:49.678485 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-04-05 00:59:49.678493 | orchestrator | Sunday 05 April 2026 00:53:21 +0000 (0:00:01.611) 0:01:28.777 ********** 2026-04-05 00:59:49.678501 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:59:49.678509 | orchestrator | 2026-04-05 00:59:49.678517 | orchestrator 
| TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-04-05 00:59:49.678525 | orchestrator | Sunday 05 April 2026 00:53:22 +0000 (0:00:00.991) 0:01:29.769 ********** 2026-04-05 00:59:49.678535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:59:49.678550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-05 00:59:49.678565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 
'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.678574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.678587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 
'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:59:49.678595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-05 00:59:49.678604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.678612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.678633 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:59:49.678642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-05 00:59:49.678655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.678663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.678672 | orchestrator | 2026-04-05 00:59:49.678680 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-04-05 00:59:49.678688 | orchestrator | Sunday 05 April 2026 00:53:28 +0000 (0:00:06.074) 0:01:35.843 ********** 2026-04-05 00:59:49.678696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:59:49.678713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-05 00:59:49.678722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.678730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-05 
00:59:49.678738 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.678752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:59:49.678760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-05 00:59:49.678768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.678785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.678794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:59:49.678802 | 
orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.678810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-05 00:59:49.678824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.678833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.678841 | orchestrator | skipping: [testbed-node-2] 2026-04-05 
00:59:49.678849 | orchestrator | 2026-04-05 00:59:49.678857 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-04-05 00:59:49.678865 | orchestrator | Sunday 05 April 2026 00:53:29 +0000 (0:00:01.257) 0:01:37.101 ********** 2026-04-05 00:59:49.678881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.678891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.678899 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.678907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.678915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.678923 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.678935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.678943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': 
'8042', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.678951 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.678959 | orchestrator | 2026-04-05 00:59:49.678967 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-04-05 00:59:49.678975 | orchestrator | Sunday 05 April 2026 00:53:31 +0000 (0:00:01.965) 0:01:39.066 ********** 2026-04-05 00:59:49.678983 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:49.678991 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:49.678999 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:59:49.679080 | orchestrator | 2026-04-05 00:59:49.679091 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-04-05 00:59:49.679099 | orchestrator | Sunday 05 April 2026 00:53:33 +0000 (0:00:01.667) 0:01:40.734 ********** 2026-04-05 00:59:49.679107 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:49.679114 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:49.679122 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:59:49.679130 | orchestrator | 2026-04-05 00:59:49.679138 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-04-05 00:59:49.679146 | orchestrator | Sunday 05 April 2026 00:53:35 +0000 (0:00:02.564) 0:01:43.298 ********** 2026-04-05 00:59:49.679154 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:59:49.679162 | orchestrator | 2026-04-05 00:59:49.679169 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-04-05 00:59:49.679177 | orchestrator | Sunday 05 April 2026 00:53:36 +0000 (0:00:00.694) 0:01:43.993 ********** 2026-04-05 00:59:49.679192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:59:49.679210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.679219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.679231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:59:49.679241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  
2026-04-05 00:59:49.679253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.679262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:59:49.679275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.679314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.679323 | orchestrator | 2026-04-05 00:59:49.679331 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-04-05 00:59:49.679339 | orchestrator | Sunday 05 April 2026 00:53:44 +0000 (0:00:07.872) 0:01:51.865 ********** 2026-04-05 00:59:49.679348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:59:49.679361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.679386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.679395 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.679403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:59:49.679416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.679425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.679433 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.679446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:59:49.679473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.679482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.679490 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.679525 | orchestrator | 2026-04-05 00:59:49.679534 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-04-05 00:59:49.679573 | orchestrator | Sunday 05 April 2026 00:53:45 +0000 (0:00:00.761) 0:01:52.627 ********** 2026-04-05 00:59:49.679608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.679635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.679645 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.679661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.679669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.679716 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.679725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.679733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.679741 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.679749 | orchestrator | 2026-04-05 00:59:49.679757 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-04-05 00:59:49.679771 | orchestrator | Sunday 05 April 2026 00:53:46 +0000 (0:00:00.923) 0:01:53.550 ********** 2026-04-05 00:59:49.679779 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:49.679786 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:49.679795 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:59:49.679802 | orchestrator | 2026-04-05 00:59:49.679810 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-04-05 00:59:49.679818 | orchestrator | Sunday 05 April 2026 00:53:47 +0000 (0:00:01.743) 0:01:55.294 ********** 2026-04-05 00:59:49.679826 | orchestrator | changed: [testbed-node-0] 2026-04-05 
00:59:49.679834 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:49.679842 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:59:49.679850 | orchestrator | 2026-04-05 00:59:49.679858 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-04-05 00:59:49.679865 | orchestrator | Sunday 05 April 2026 00:53:49 +0000 (0:00:02.231) 0:01:57.526 ********** 2026-04-05 00:59:49.679873 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.679881 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.679889 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.679897 | orchestrator | 2026-04-05 00:59:49.679910 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-04-05 00:59:49.679919 | orchestrator | Sunday 05 April 2026 00:53:50 +0000 (0:00:00.396) 0:01:57.922 ********** 2026-04-05 00:59:49.679926 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:59:49.679934 | orchestrator | 2026-04-05 00:59:49.679952 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-04-05 00:59:49.679960 | orchestrator | Sunday 05 April 2026 00:53:51 +0000 (0:00:00.707) 0:01:58.630 ********** 2026-04-05 00:59:49.679969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check 
inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-05 00:59:49.679978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-05 00:59:49.679991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-05 00:59:49.680005 | orchestrator | 2026-04-05 00:59:49.680111 | 
orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-04-05 00:59:49.680125 | orchestrator | Sunday 05 April 2026 00:53:55 +0000 (0:00:04.544) 0:02:03.174 ********** 2026-04-05 00:59:49.680133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-05 00:59:49.680142 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.680157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 
check inter 2000 rise 2 fall 5']}}}})  2026-04-05 00:59:49.680166 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.680174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-05 00:59:49.680182 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.680190 | orchestrator | 2026-04-05 00:59:49.680198 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-04-05 00:59:49.680206 | orchestrator | Sunday 05 April 2026 00:53:58 +0000 (0:00:02.461) 0:02:05.635 ********** 2026-04-05 00:59:49.680214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-05 00:59:49.680228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-05 00:59:49.680270 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.680278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-05 00:59:49.680287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-05 00:59:49.680295 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.680303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-05 00:59:49.680316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server 
testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-05 00:59:49.680357 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.680366 | orchestrator | 2026-04-05 00:59:49.680374 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-04-05 00:59:49.680382 | orchestrator | Sunday 05 April 2026 00:54:00 +0000 (0:00:02.812) 0:02:08.447 ********** 2026-04-05 00:59:49.680391 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.680401 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.680410 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.680420 | orchestrator | 2026-04-05 00:59:49.680430 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-04-05 00:59:49.680440 | orchestrator | Sunday 05 April 2026 00:54:01 +0000 (0:00:00.433) 0:02:08.881 ********** 2026-04-05 00:59:49.680449 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.680459 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.680496 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.680506 | orchestrator | 2026-04-05 00:59:49.680517 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-04-05 00:59:49.680527 | orchestrator | Sunday 05 April 2026 00:54:03 +0000 (0:00:01.806) 0:02:10.688 ********** 2026-04-05 00:59:49.680536 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:59:49.680546 | orchestrator | 2026-04-05 00:59:49.680556 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-04-05 00:59:49.680565 | orchestrator | Sunday 05 April 2026 00:54:04 +0000 (0:00:00.997) 0:02:11.685 ********** 2026-04-05 
00:59:49.680615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:59:49.680633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.680654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 
'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.680692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.680704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:59:49.680729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.680745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.680755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.680771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:59:49.680825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.680835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.680856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.680867 | orchestrator | 2026-04-05 00:59:49.680876 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-04-05 00:59:49.680886 | orchestrator | Sunday 05 April 2026 00:54:11 +0000 (0:00:07.835) 0:02:19.520 ********** 2026-04-05 00:59:49.680896 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:59:49.680907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.680925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.680935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.680951 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.680961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:59:49.680972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.680982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.686375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 
'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.686468 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.686492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:59:49.686530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.686550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.686562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.686574 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.686585 | orchestrator | 2026-04-05 00:59:49.686597 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-04-05 00:59:49.686609 | 
orchestrator | Sunday 05 April 2026 00:54:13 +0000 (0:00:01.721) 0:02:21.242 ********** 2026-04-05 00:59:49.686621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.686659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.686679 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.686696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.686727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.686745 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.686765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.686784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  
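The backend member lines echoed throughout this play all carry the same health-check parameters, `check inter 2000 rise 2 fall 5`. Under standard HAProxy semantics (an assumption about this deployment's defaults; the log only shows the server lines themselves), `inter` is the check interval in milliseconds, `fall` the number of consecutive failed checks before a server is marked DOWN, and `rise` the number of consecutive passing checks before it is returned to service. A minimal sketch of the state-transition times these values imply:

```python
# Sketch: state-transition times implied by an HAProxy
# "check inter 2000 rise 2 fall 5" server line. Parameter meanings
# follow HAProxy's documented check semantics; the concrete values
# are taken from the custom_member_list entries in the log above.

def detection_times_ms(inter: int, rise: int, fall: int) -> dict:
    """Return how long (in ms) until a server is marked DOWN,
    and how long until a recovered server is marked UP again."""
    return {
        "mark_down_ms": inter * fall,  # fall consecutive failed checks
        "mark_up_ms": inter * rise,    # rise consecutive passing checks
    }

times = detection_times_ms(inter=2000, rise=2, fall=5)
print(times)  # {'mark_down_ms': 10000, 'mark_up_ms': 4000}
```

With these settings a failed backend drops out of rotation after roughly 10 seconds of failed checks and rejoins about 4 seconds after it starts passing again.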
2026-04-05 00:59:49.686803 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.686822 | orchestrator | 2026-04-05 00:59:49.686839 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-04-05 00:59:49.686856 | orchestrator | Sunday 05 April 2026 00:54:15 +0000 (0:00:01.941) 0:02:23.183 ********** 2026-04-05 00:59:49.686876 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:49.686895 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:59:49.686914 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:49.686927 | orchestrator | 2026-04-05 00:59:49.686938 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-04-05 00:59:49.686949 | orchestrator | Sunday 05 April 2026 00:54:17 +0000 (0:00:01.416) 0:02:24.599 ********** 2026-04-05 00:59:49.686960 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:49.686970 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:49.686981 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:59:49.686991 | orchestrator | 2026-04-05 00:59:49.687002 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-04-05 00:59:49.687054 | orchestrator | Sunday 05 April 2026 00:54:19 +0000 (0:00:02.292) 0:02:26.892 ********** 2026-04-05 00:59:49.687066 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.687076 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.687087 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.687098 | orchestrator | 2026-04-05 00:59:49.687109 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-04-05 00:59:49.687126 | orchestrator | Sunday 05 April 2026 00:54:19 +0000 (0:00:00.322) 0:02:27.215 ********** 2026-04-05 00:59:49.687138 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.687149 | orchestrator | skipping: [testbed-node-1] 
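Each haproxy-config item above carries a paired internal/external definition (e.g. `cinder_api` and `cinder_api_external`), distinguished by the `external` flag and an `external_fqdn` on the external entry. As a hypothetical sketch (the dict literal is copied from the cinder-api item in the log; the helper name is illustrative, not from the kolla-ansible source), splitting such a service mapping into internal and external frontends looks like:

```python
# Sketch: partition a kolla-style haproxy service mapping (as echoed
# in the task output above) into internal and external frontends.
# Values below are trimmed from the cinder-api item in this log.

haproxy = {
    "cinder_api": {
        "enabled": "yes", "mode": "http", "external": False,
        "port": "8776", "listen_port": "8776",
    },
    "cinder_api_external": {
        "enabled": "yes", "mode": "http", "external": True,
        "external_fqdn": "api.testbed.osism.xyz",
        "port": "8776", "listen_port": "8776",
    },
}

def split_frontends(services: dict) -> tuple[dict, dict]:
    """Partition enabled entries by their 'external' flag."""
    internal = {k: v for k, v in services.items()
                if v.get("enabled") == "yes" and not v["external"]}
    external = {k: v for k, v in services.items()
                if v.get("enabled") == "yes" and v["external"]}
    return internal, external

internal, external = split_frontends(haproxy)
print(sorted(internal))  # ['cinder_api']
print(sorted(external))  # ['cinder_api_external']
```

This mirrors why each "Copying over ... haproxy config" task in the log iterates the same items twice: once for the internal VIP listener and once for the `api.testbed.osism.xyz` external frontend.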
2026-04-05 00:59:49.687160 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.687170 | orchestrator | 2026-04-05 00:59:49.687181 | orchestrator | TASK [include_role : designate] ************************************************ 2026-04-05 00:59:49.687192 | orchestrator | Sunday 05 April 2026 00:54:20 +0000 (0:00:00.598) 0:02:27.813 ********** 2026-04-05 00:59:49.687203 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:59:49.687214 | orchestrator | 2026-04-05 00:59:49.687227 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-04-05 00:59:49.687246 | orchestrator | Sunday 05 April 2026 00:54:21 +0000 (0:00:00.905) 0:02:28.719 ********** 2026-04-05 00:59:49.687277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:59:49.687343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:59:49.687362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 00:59:49.687378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.687404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 00:59:49.687424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.687445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.687490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.687511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:59:49.687523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.687540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.687552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 00:59:49.687570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.687588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.687600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.687611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.687623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.687648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.687666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.687708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.687744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.687763 | orchestrator | 2026-04-05 00:59:49.687782 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-04-05 00:59:49.687794 | orchestrator | Sunday 05 April 2026 00:54:25 +0000 (0:00:04.053) 0:02:32.773 ********** 2026-04-05 00:59:49.687806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:59:49.687824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 00:59:49.687836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:59:49.687855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.687874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 00:59:49.687886 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.687897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.687914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.687926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': 
{'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.687948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.687959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.687979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': 
False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.687990 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.688002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.688074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.688086 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.688103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:59:49.688122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 00:59:49.688134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.688153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.688165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.688176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.688192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.688210 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.688221 | orchestrator | 2026-04-05 00:59:49.688232 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-04-05 00:59:49.688243 | orchestrator | Sunday 05 April 2026 00:54:26 +0000 (0:00:01.427) 0:02:34.201 ********** 2026-04-05 00:59:49.688255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.688268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.688279 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.688290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': 
['option httpchk']}})  2026-04-05 00:59:49.688307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.688319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.688330 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.688349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.688360 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.688371 | orchestrator | 2026-04-05 00:59:49.688382 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-04-05 00:59:49.688393 | orchestrator | Sunday 05 April 2026 00:54:27 +0000 (0:00:01.234) 0:02:35.440 ********** 2026-04-05 00:59:49.688404 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:49.688414 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:49.688425 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:59:49.688435 | orchestrator | 2026-04-05 00:59:49.688446 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-04-05 00:59:49.688457 | orchestrator | Sunday 05 April 2026 00:54:29 +0000 (0:00:01.426) 0:02:36.866 ********** 2026-04-05 00:59:49.688467 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:49.688478 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:49.688489 | orchestrator | changed: 
[testbed-node-2] 2026-04-05 00:59:49.688499 | orchestrator | 2026-04-05 00:59:49.688510 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-04-05 00:59:49.688520 | orchestrator | Sunday 05 April 2026 00:54:31 +0000 (0:00:02.128) 0:02:38.995 ********** 2026-04-05 00:59:49.688531 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.688542 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.688553 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.688563 | orchestrator | 2026-04-05 00:59:49.688574 | orchestrator | TASK [include_role : glance] *************************************************** 2026-04-05 00:59:49.688585 | orchestrator | Sunday 05 April 2026 00:54:31 +0000 (0:00:00.335) 0:02:39.331 ********** 2026-04-05 00:59:49.688602 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:59:49.688613 | orchestrator | 2026-04-05 00:59:49.688624 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-04-05 00:59:49.688634 | orchestrator | Sunday 05 April 2026 00:54:32 +0000 (0:00:01.084) 0:02:40.416 ********** 2026-04-05 00:59:49.688652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': 
'30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 00:59:49.688675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-05 00:59:49.688700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 00:59:49.688721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 
'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-05 00:59:49.688739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 00:59:49.688769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 
'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-05 00:59:49.688782 | orchestrator | 2026-04-05 00:59:49.688793 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-04-05 00:59:49.688804 | orchestrator | Sunday 05 April 2026 00:54:37 +0000 (0:00:04.868) 0:02:45.284 ********** 2026-04-05 00:59:49.688816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', 
'']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 00:59:49.688840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-05 00:59:49.688853 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.688873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 
6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 00:59:49.688897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-05 00:59:49.688909 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.688929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 00:59:49.688952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-05 00:59:49.688964 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.688975 | orchestrator | 2026-04-05 00:59:49.688986 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-04-05 00:59:49.688997 | orchestrator | Sunday 05 April 2026 00:54:42 +0000 (0:00:04.546) 0:02:49.831 ********** 2026-04-05 00:59:49.689037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-05 00:59:49.689057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-05 00:59:49.689076 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.689087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-05 00:59:49.689099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-05 00:59:49.689111 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.689122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-05 00:59:49.689138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-05 00:59:49.689150 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.689160 | orchestrator | 2026-04-05 00:59:49.689171 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-04-05 00:59:49.689182 | orchestrator | Sunday 05 April 2026 00:54:47 +0000 (0:00:05.212) 0:02:55.044 ********** 2026-04-05 00:59:49.689193 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:49.689204 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:49.689214 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:59:49.689225 | orchestrator | 2026-04-05 00:59:49.689235 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-04-05 00:59:49.689246 | orchestrator | Sunday 05 April 2026 00:54:48 +0000 (0:00:01.340) 0:02:56.385 ********** 2026-04-05 00:59:49.689257 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:49.689268 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:49.689278 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:59:49.689289 | orchestrator | 2026-04-05 00:59:49.689299 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-04-05 00:59:49.689310 | orchestrator | Sunday 05 April 2026 00:54:51 +0000 (0:00:02.397) 0:02:58.782 ********** 2026-04-05 00:59:49.689321 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.689332 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.689342 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.689353 | orchestrator | 2026-04-05 00:59:49.689363 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-04-05 00:59:49.689374 | orchestrator | Sunday 05 April 2026 00:54:51 +0000 (0:00:00.421) 0:02:59.204 ********** 2026-04-05 00:59:49.689391 | orchestrator | included: grafana for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:59:49.689401 | orchestrator | 2026-04-05 00:59:49.689412 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-04-05 00:59:49.689423 | orchestrator | Sunday 05 April 2026 00:54:53 +0000 (0:00:01.430) 0:03:00.634 ********** 2026-04-05 00:59:49.689442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:59:49.689454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:59:49.689465 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:59:49.689476 | orchestrator | 2026-04-05 00:59:49.689487 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-04-05 00:59:49.689498 | orchestrator | Sunday 05 April 2026 00:54:57 +0000 (0:00:04.653) 0:03:05.288 ********** 2026-04-05 00:59:49.689514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:59:49.689525 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.689536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 
'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:59:49.689554 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.689572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:59:49.689584 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.689595 | orchestrator | 2026-04-05 00:59:49.689605 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-04-05 00:59:49.689616 | orchestrator | Sunday 05 April 2026 00:54:58 +0000 (0:00:00.535) 0:03:05.823 ********** 2026-04-05 00:59:49.689627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.689638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.689649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.689660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.689671 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.689682 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.689693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.689704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.689715 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.689725 | orchestrator | 2026-04-05 00:59:49.689741 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-04-05 00:59:49.689752 | orchestrator | Sunday 05 April 2026 00:54:59 +0000 
(0:00:01.444) 0:03:07.268 ********** 2026-04-05 00:59:49.689762 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:49.689773 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:49.689784 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:59:49.689795 | orchestrator | 2026-04-05 00:59:49.689805 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-04-05 00:59:49.689823 | orchestrator | Sunday 05 April 2026 00:55:01 +0000 (0:00:01.645) 0:03:08.914 ********** 2026-04-05 00:59:49.689833 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:49.689844 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:49.689858 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:59:49.689876 | orchestrator | 2026-04-05 00:59:49.689895 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-04-05 00:59:49.689913 | orchestrator | Sunday 05 April 2026 00:55:04 +0000 (0:00:03.079) 0:03:11.993 ********** 2026-04-05 00:59:49.689929 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.689946 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.689963 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.689980 | orchestrator | 2026-04-05 00:59:49.689999 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-04-05 00:59:49.690160 | orchestrator | Sunday 05 April 2026 00:55:04 +0000 (0:00:00.492) 0:03:12.485 ********** 2026-04-05 00:59:49.690174 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:59:49.690185 | orchestrator | 2026-04-05 00:59:49.690196 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-04-05 00:59:49.690207 | orchestrator | Sunday 05 April 2026 00:55:06 +0000 (0:00:01.437) 0:03:13.922 ********** 2026-04-05 00:59:49.690233 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 00:59:49.690255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 00:59:49.690301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 00:59:49.690314 | orchestrator | 2026-04-05 00:59:49.690326 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-04-05 00:59:49.690343 | orchestrator | Sunday 05 April 2026 00:55:11 +0000 (0:00:05.234) 0:03:19.157 ********** 2026-04-05 00:59:49.690368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 00:59:49.690381 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.690399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 
'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 00:59:49.690418 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.690437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 
'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 00:59:49.690449 | orchestrator | skipping: [testbed-node-2] 
2026-04-05 00:59:49.690460 | orchestrator | 2026-04-05 00:59:49.690471 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-04-05 00:59:49.690481 | orchestrator | Sunday 05 April 2026 00:55:13 +0000 (0:00:01.401) 0:03:20.558 ********** 2026-04-05 00:59:49.690492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-05 00:59:49.690504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-05 00:59:49.690516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-05 00:59:49.690536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-05 00:59:49.690552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  
2026-04-05 00:59:49.690565 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.690576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-05 00:59:49.690587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-05 00:59:49.690598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-05 00:59:49.690610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-05 00:59:49.690620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-05 00:59:49.690635 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-05 00:59:49.690646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-05 00:59:49.690656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-05 00:59:49.690666 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.690675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-05 00:59:49.690691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-05 00:59:49.690701 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.690710 | orchestrator | 2026-04-05 00:59:49.690720 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-04-05 00:59:49.690729 | orchestrator | Sunday 05 April 2026 00:55:15 +0000 (0:00:02.066) 0:03:22.624 ********** 2026-04-05 00:59:49.690739 | orchestrator | changed: [testbed-node-0] 
2026-04-05 00:59:49.690749 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:49.690758 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:59:49.690768 | orchestrator | 2026-04-05 00:59:49.690777 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-04-05 00:59:49.690787 | orchestrator | Sunday 05 April 2026 00:55:16 +0000 (0:00:01.625) 0:03:24.250 ********** 2026-04-05 00:59:49.690796 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:49.690806 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:49.690815 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:59:49.690825 | orchestrator | 2026-04-05 00:59:49.690834 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-04-05 00:59:49.690844 | orchestrator | Sunday 05 April 2026 00:55:19 +0000 (0:00:02.383) 0:03:26.634 ********** 2026-04-05 00:59:49.690853 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.690863 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.690877 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.690886 | orchestrator | 2026-04-05 00:59:49.690896 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-04-05 00:59:49.690906 | orchestrator | Sunday 05 April 2026 00:55:19 +0000 (0:00:00.360) 0:03:26.995 ********** 2026-04-05 00:59:49.690915 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.690925 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.690934 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.690944 | orchestrator | 2026-04-05 00:59:49.690953 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-04-05 00:59:49.690963 | orchestrator | Sunday 05 April 2026 00:55:19 +0000 (0:00:00.348) 0:03:27.343 ********** 2026-04-05 00:59:49.690972 | orchestrator | included: keystone for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:59:49.690982 | orchestrator | 2026-04-05 00:59:49.690991 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-04-05 00:59:49.691001 | orchestrator | Sunday 05 April 2026 00:55:21 +0000 (0:00:01.417) 0:03:28.761 ********** 2026-04-05 00:59:49.691030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 00:59:49.691048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 00:59:49.691067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 00:59:49.691078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 00:59:49.691092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 00:59:49.691103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 00:59:49.691120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 00:59:49.691138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 00:59:49.691148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 00:59:49.691159 | orchestrator | 2026-04-05 00:59:49.691168 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-04-05 00:59:49.691178 | orchestrator | Sunday 05 April 2026 00:55:25 +0000 (0:00:04.439) 0:03:33.200 ********** 2026-04-05 00:59:49.691189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-05 00:59:49.691200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-05 00:59:49.691233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-05 00:59:49.691250 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:59:49.691268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-05 00:59:49.691279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-05 00:59:49.691293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-05 00:59:49.691304 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:59:49.691314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-05 00:59:49.691325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-05 00:59:49.691350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-05 00:59:49.691361 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:59:49.691371 | orchestrator |
2026-04-05 00:59:49.691380 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2026-04-05 00:59:49.691390 | orchestrator | Sunday 05 April 2026 00:55:26 +0000 (0:00:00.755) 0:03:33.955 **********
2026-04-05 00:59:49.691401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-04-05 00:59:49.691411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-04-05 00:59:49.691422 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:59:49.691432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-04-05 00:59:49.691442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-04-05 00:59:49.691452 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:59:49.691462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-04-05 00:59:49.691476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-04-05 00:59:49.691486 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:59:49.691497 | orchestrator |
2026-04-05 00:59:49.691506 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2026-04-05 00:59:49.691516 | orchestrator | Sunday 05 April 2026 00:55:27 +0000 (0:00:01.229) 0:03:35.184 **********
2026-04-05 00:59:49.691526 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:59:49.691535 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:59:49.691545 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:59:49.691554 | orchestrator |
2026-04-05 00:59:49.691564 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2026-04-05 00:59:49.691574 | orchestrator | Sunday 05 April 2026 00:55:28 +0000 (0:00:01.242) 0:03:36.427 **********
2026-04-05 00:59:49.691590 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:59:49.691599 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:59:49.691609 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:59:49.691618 | orchestrator |
2026-04-05 00:59:49.691628 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2026-04-05 00:59:49.691637 | orchestrator | Sunday 05 April 2026 00:55:31 +0000 (0:00:02.256) 0:03:38.683 **********
2026-04-05 00:59:49.691647 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:59:49.691656 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:59:49.691666 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:59:49.691675 | orchestrator |
2026-04-05 00:59:49.691685 | orchestrator | TASK [include_role : magnum] ***************************************************
2026-04-05 00:59:49.691695 | orchestrator | Sunday 05 April 2026 00:55:31 +0000 (0:00:00.653) 0:03:39.337 **********
2026-04-05 00:59:49.691705 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:59:49.691714 | orchestrator |
2026-04-05 00:59:49.691724 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2026-04-05 00:59:49.691734 | orchestrator | Sunday 05 April 2026 00:55:33 +0000 (0:00:02.125) 0:03:41.463 **********
2026-04-05 00:59:49.691750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 00:59:49.691762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 00:59:49.691777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 00:59:49.691794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 00:59:49.691810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 00:59:49.691821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 00:59:49.691831 | orchestrator |
2026-04-05 00:59:49.691840 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2026-04-05 00:59:49.691850 | orchestrator | Sunday 05 April 2026 00:55:38 +0000 (0:00:04.954) 0:03:46.417 **********
2026-04-05 00:59:49.691860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 00:59:49.691876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 00:59:49.691892 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:59:49.691903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 00:59:49.691920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 00:59:49.691930 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:59:49.691940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 00:59:49.691956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 00:59:49.691971 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:59:49.691981 | orchestrator |
2026-04-05 00:59:49.691991 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2026-04-05 00:59:49.692000 | orchestrator | Sunday 05 April 2026 00:55:40 +0000 (0:00:01.120) 0:03:47.537 **********
2026-04-05 00:59:49.692026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-04-05 00:59:49.692037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-04-05 00:59:49.692047 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:59:49.692057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-04-05 00:59:49.692067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-04-05 00:59:49.692077 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:59:49.692087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-04-05 00:59:49.692097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-04-05 00:59:49.692107 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:59:49.692117 | orchestrator |
2026-04-05 00:59:49.692132 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2026-04-05 00:59:49.692142 | orchestrator | Sunday 05 April 2026 00:55:41 +0000 (0:00:01.363) 0:03:48.901 **********
2026-04-05 00:59:49.692152 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:59:49.692161 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:59:49.692171 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:59:49.692181 | orchestrator |
2026-04-05 00:59:49.692190 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2026-04-05 00:59:49.692200 | orchestrator | Sunday 05 April 2026 00:55:42 +0000 (0:00:01.280) 0:03:50.181 **********
2026-04-05 00:59:49.692210 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:59:49.692219 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:59:49.692229 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:59:49.692238 | orchestrator |
2026-04-05 00:59:49.692248 | orchestrator | TASK [include_role : manila] ***************************************************
2026-04-05 00:59:49.692257 | orchestrator | Sunday 05 April 2026 00:55:44 +0000 (0:00:01.346) 0:03:52.296 **********
2026-04-05 00:59:49.692267 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:59:49.692277 | orchestrator |
2026-04-05 00:59:49.692286 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2026-04-05 00:59:49.692296 | orchestrator | Sunday 05 April 2026 00:55:46 +0000 (0:00:01.346) 0:03:53.643 **********
2026-04-05 00:59:49.692306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 00:59:49.692328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 00:59:49.692339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-05 00:59:49.692349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-05 00:59:49.692365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 00:59:49.692375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 00:59:49.692391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 00:59:49.692406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-05 00:59:49.692416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 00:59:49.692426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-05 00:59:49.692442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-05 00:59:49.692452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-05 00:59:49.692468 | orchestrator |
2026-04-05 00:59:49.692478 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2026-04-05 00:59:49.692488 | orchestrator | Sunday 05 April 2026 00:55:51 +0000 (0:00:05.048) 0:03:58.691 **********
2026-04-05 00:59:49.692503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 00:59:49.692513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 00:59:49.692522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-05 00:59:49.692538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-05 00:59:49.692548 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:59:49.692558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:59:49.692579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:59:49.692593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.692603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.692613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.692629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.692646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.692656 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.692666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.692676 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.692686 | orchestrator | 2026-04-05 00:59:49.692695 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-04-05 00:59:49.692705 | orchestrator | Sunday 05 April 2026 00:55:52 +0000 (0:00:01.123) 0:03:59.814 ********** 2026-04-05 00:59:49.692715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 
'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.692730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.692740 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.692751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.692761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.692770 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.692780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.692790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.692799 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.692809 | orchestrator | 2026-04-05 00:59:49.692819 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-04-05 00:59:49.692828 | orchestrator | Sunday 05 April 2026 00:55:53 +0000 (0:00:01.599) 0:04:01.414 ********** 2026-04-05 00:59:49.692838 | 
orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:49.692848 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:49.692857 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:59:49.692867 | orchestrator | 2026-04-05 00:59:49.692876 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-04-05 00:59:49.692892 | orchestrator | Sunday 05 April 2026 00:55:55 +0000 (0:00:01.480) 0:04:02.895 ********** 2026-04-05 00:59:49.692901 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:49.692911 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:49.692920 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:59:49.692930 | orchestrator | 2026-04-05 00:59:49.692939 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-04-05 00:59:49.693190 | orchestrator | Sunday 05 April 2026 00:55:57 +0000 (0:00:02.373) 0:04:05.268 ********** 2026-04-05 00:59:49.693211 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:59:49.693221 | orchestrator | 2026-04-05 00:59:49.693231 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-04-05 00:59:49.693240 | orchestrator | Sunday 05 April 2026 00:55:58 +0000 (0:00:01.176) 0:04:06.444 ********** 2026-04-05 00:59:49.693250 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-05 00:59:49.693260 | orchestrator | 2026-04-05 00:59:49.693270 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-04-05 00:59:49.693279 | orchestrator | Sunday 05 April 2026 00:56:02 +0000 (0:00:03.338) 0:04:09.783 ********** 2026-04-05 00:59:49.693298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 00:59:49.693311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-05 00:59:49.693322 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.693340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 00:59:49.693362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-05 00:59:49.693372 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.693387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 
5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 00:59:49.693405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-05 00:59:49.693415 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.693425 | orchestrator | 2026-04-05 00:59:49.693434 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-04-05 00:59:49.693444 | orchestrator | Sunday 05 April 2026 00:56:05 +0000 (0:00:03.528) 0:04:13.312 ********** 2026-04-05 00:59:49.693462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 00:59:49.693478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-05 00:59:49.693489 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.693504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 00:59:49.693521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-05 00:59:49.693531 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.693546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 
5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 00:59:49.693557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-05 00:59:49.693573 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.693583 | orchestrator | 2026-04-05 00:59:49.693592 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-04-05 00:59:49.693599 | orchestrator | Sunday 05 April 2026 00:56:08 +0000 (0:00:03.143) 0:04:16.455 ********** 2026-04-05 00:59:49.693608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': 
['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-05 00:59:49.693621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-05 00:59:49.693629 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.693642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-05 00:59:49.693651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-05 00:59:49.693660 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.693672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-05 00:59:49.693680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-05 00:59:49.693696 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.693704 | orchestrator | 2026-04-05 00:59:49.693712 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-04-05 00:59:49.693720 | orchestrator | Sunday 05 April 2026 00:56:12 +0000 (0:00:03.720) 0:04:20.176 ********** 
2026-04-05 00:59:49.693728 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:49.693736 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:49.693743 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:59:49.693751 | orchestrator | 2026-04-05 00:59:49.693759 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-04-05 00:59:49.693769 | orchestrator | Sunday 05 April 2026 00:56:15 +0000 (0:00:02.385) 0:04:22.561 ********** 2026-04-05 00:59:49.693778 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.693787 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.693796 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.693806 | orchestrator | 2026-04-05 00:59:49.693815 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-04-05 00:59:49.693825 | orchestrator | Sunday 05 April 2026 00:56:16 +0000 (0:00:01.343) 0:04:23.905 ********** 2026-04-05 00:59:49.693834 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.693843 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.693851 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.693859 | orchestrator | 2026-04-05 00:59:49.693866 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-04-05 00:59:49.693874 | orchestrator | Sunday 05 April 2026 00:56:17 +0000 (0:00:00.692) 0:04:24.598 ********** 2026-04-05 00:59:49.693882 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:59:49.693890 | orchestrator | 2026-04-05 00:59:49.693898 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-04-05 00:59:49.693910 | orchestrator | Sunday 05 April 2026 00:56:18 +0000 (0:00:01.236) 0:04:25.834 ********** 2026-04-05 00:59:49.693918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 
'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-05 00:59:49.693927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-05 00:59:49.693948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': 
'30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-05 00:59:49.693957 | orchestrator | 2026-04-05 00:59:49.693965 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-04-05 00:59:49.693973 | orchestrator | Sunday 05 April 2026 00:56:20 +0000 (0:00:01.976) 0:04:27.810 ********** 2026-04-05 00:59:49.693981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-05 00:59:49.693989 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.693997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 
'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-05 00:59:49.694147 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.694218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-05 00:59:49.694230 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.694238 | orchestrator | 2026-04-05 00:59:49.694246 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-04-05 00:59:49.694254 | orchestrator | Sunday 05 April 2026 00:56:20 +0000 (0:00:00.478) 0:04:28.289 ********** 2026-04-05 00:59:49.694262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-05 00:59:49.694281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s'], 'active_passive': True}})  2026-04-05 00:59:49.694289 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.694297 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.694305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-05 00:59:49.694313 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.694321 | orchestrator | 2026-04-05 00:59:49.694329 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-04-05 00:59:49.694337 | orchestrator | Sunday 05 April 2026 00:56:21 +0000 (0:00:00.724) 0:04:29.014 ********** 2026-04-05 00:59:49.694345 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.694353 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.694361 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.694369 | orchestrator | 2026-04-05 00:59:49.694382 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-04-05 00:59:49.694390 | orchestrator | Sunday 05 April 2026 00:56:22 +0000 (0:00:01.280) 0:04:30.295 ********** 2026-04-05 00:59:49.694398 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.694406 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.694414 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.694421 | orchestrator | 2026-04-05 00:59:49.694429 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-04-05 00:59:49.694437 | orchestrator | Sunday 05 April 2026 00:56:24 +0000 (0:00:01.792) 0:04:32.087 ********** 2026-04-05 00:59:49.694445 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.694453 | orchestrator | skipping: 
[testbed-node-1] 2026-04-05 00:59:49.694461 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.694469 | orchestrator | 2026-04-05 00:59:49.694477 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-04-05 00:59:49.694485 | orchestrator | Sunday 05 April 2026 00:56:25 +0000 (0:00:00.508) 0:04:32.596 ********** 2026-04-05 00:59:49.694493 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:59:49.694501 | orchestrator | 2026-04-05 00:59:49.694508 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-04-05 00:59:49.694516 | orchestrator | Sunday 05 April 2026 00:56:26 +0000 (0:00:01.172) 0:04:33.769 ********** 2026-04-05 00:59:49.694525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:59:49.694590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.694610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-05 00:59:49.694623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 
'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-05 00:59:49.694633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.694643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 00:59:49.694727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': 
{'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 00:59:49.694759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 00:59:49.694774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 00:59:49.694792 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:59:49.694808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.694822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-05 00:59:49.694896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 00:59:49.694915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.694924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.694937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-05 00:59:49.694946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u 
openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-05 00:59:49.695058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-05 00:59:49.695079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-05 00:59:49.695088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.695101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 00:59:49.695110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:59:49.695120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 00:59:49.695187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 00:59:49.695200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.695209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 00:59:49.695221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 
'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-05 00:59:49.695229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.695297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-05 00:59:49.695310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-05 00:59:49.695319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.695335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 00:59:49.695344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 00:59:49.695353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 00:59:49.695361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.695447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 00:59:49.695460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-05 00:59:49.695468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  
2026-04-05 00:59:49.695483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-05 00:59:49.695491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.695505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-05 00:59:49.695553 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 00:59:49.695563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.695572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-04-05 00:59:49.695585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-04-05 00:59:49.695593 | orchestrator |
2026-04-05 00:59:49.695602 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2026-04-05 00:59:49.695610 | orchestrator | Sunday 05 April 2026 00:56:32 +0000 (0:00:06.277) 0:04:40.046 **********
2026-04-05 00:59:49.695618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696',
'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:59:49.695684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.695697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 
'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-05 00:59:49.695710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-05 00:59:49.695720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.695735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 00:59:49.695797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 00:59:49.695810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 00:59:49.695819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 00:59:49.695827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.695840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-05 00:59:49.695854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 00:59:49.695863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.695934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-05 00:59:49.695947 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:59:49.695956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.695971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 
'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-05 00:59:49.695980 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.696173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-05 00:59:49.696206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:59:49.696216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-05 00:59:49.696229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.696249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.696352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': 
{'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-05 00:59:49.696375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 00:59:49.696387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-05 00:59:49.696408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 00:59:49.696431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.696445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 00:59:49.696541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 00:59:49.696558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 00:59:49.696566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 00:59:49.696575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.696589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 00:59:49.696604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-05 00:59:49.696611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 00:59:49.696680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.696699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 00:59:49.696712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-05 00:59:49.696724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.696738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 00:59:49.696745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-05 00:59:49.696809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.696820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-05 00:59:49.696827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-05 00:59:49.696843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-05 00:59:49.696851 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.696858 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.696865 | orchestrator | 2026-04-05 00:59:49.696872 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-04-05 00:59:49.696880 | orchestrator | Sunday 05 April 2026 00:56:34 +0000 (0:00:01.594) 0:04:41.640 ********** 2026-04-05 00:59:49.696887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 
'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.696895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.696903 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.696910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.696917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.696924 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.696985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.696995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.697003 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.697032 | orchestrator | 2026-04-05 00:59:49.697039 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-04-05 00:59:49.697046 | orchestrator | Sunday 05 April 
2026 00:56:35 +0000 (0:00:01.632) 0:04:43.273 ********** 2026-04-05 00:59:49.697053 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:49.697059 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:49.697066 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:59:49.697073 | orchestrator | 2026-04-05 00:59:49.697083 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-04-05 00:59:49.697093 | orchestrator | Sunday 05 April 2026 00:56:37 +0000 (0:00:01.256) 0:04:44.530 ********** 2026-04-05 00:59:49.697105 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:49.697117 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:49.697132 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:59:49.697152 | orchestrator | 2026-04-05 00:59:49.697163 | orchestrator | TASK [include_role : placement] ************************************************ 2026-04-05 00:59:49.697174 | orchestrator | Sunday 05 April 2026 00:56:38 +0000 (0:00:01.694) 0:04:46.224 ********** 2026-04-05 00:59:49.697185 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:59:49.697196 | orchestrator | 2026-04-05 00:59:49.697207 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-04-05 00:59:49.697217 | orchestrator | Sunday 05 April 2026 00:56:40 +0000 (0:00:01.449) 0:04:47.674 ********** 2026-04-05 00:59:49.697237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 00:59:49.697251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 00:59:49.697324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 00:59:49.697335 | orchestrator | 2026-04-05 00:59:49.697343 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-04-05 00:59:49.697350 | orchestrator | Sunday 05 April 2026 00:56:44 +0000 (0:00:04.389) 0:04:52.063 ********** 2026-04-05 00:59:49.697364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 00:59:49.697371 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.697386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 00:59:49.697393 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.697401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 00:59:49.697408 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.697415 | orchestrator | 2026-04-05 00:59:49.697421 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-04-05 00:59:49.697428 | orchestrator | Sunday 05 April 2026 00:56:45 +0000 (0:00:01.306) 0:04:53.369 ********** 2026-04-05 00:59:49.697506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-05 00:59:49.697525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-05 00:59:49.697543 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.697555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-05 00:59:49.697567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-05 00:59:49.697580 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-05 00:59:49.697591 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.697604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-05 00:59:49.697611 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.697618 | orchestrator | 2026-04-05 00:59:49.697625 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-04-05 00:59:49.697631 | orchestrator | Sunday 05 April 2026 00:56:46 +0000 (0:00:01.042) 0:04:54.412 ********** 2026-04-05 00:59:49.697638 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:49.697645 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:49.697651 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:59:49.697658 | orchestrator | 2026-04-05 00:59:49.697664 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-04-05 00:59:49.697671 | orchestrator | Sunday 05 April 2026 00:56:48 +0000 (0:00:01.421) 0:04:55.833 ********** 2026-04-05 00:59:49.697678 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:49.697684 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:59:49.697691 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:49.697697 | orchestrator | 2026-04-05 00:59:49.697704 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-04-05 00:59:49.697715 | orchestrator | Sunday 05 April 2026 00:56:51 +0000 (0:00:02.846) 
0:04:58.680 ********** 2026-04-05 00:59:49.697722 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:59:49.697730 | orchestrator | 2026-04-05 00:59:49.697737 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-04-05 00:59:49.697744 | orchestrator | Sunday 05 April 2026 00:56:53 +0000 (0:00:02.378) 0:05:01.058 ********** 2026-04-05 00:59:49.697751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:59:49.697833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:59:49.697845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:59:49.697857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:59:49.697865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.697873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.697939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:59:49.697949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.697956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.697968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:59:49.697976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.698073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.698087 | orchestrator | 2026-04-05 00:59:49.698094 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-04-05 00:59:49.698101 | orchestrator | Sunday 05 April 2026 00:57:02 +0000 (0:00:08.606) 0:05:09.665 ********** 2026-04-05 00:59:49.698108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:59:49.698121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:59:49.698129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.698142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': 
{'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.698150 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.698183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:59:49.698192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:59:49.698204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.698211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.698224 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.698232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:59:49.698261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:59:49.698270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.698277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.698284 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.698290 | orchestrator | 2026-04-05 00:59:49.698301 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-04-05 00:59:49.698309 | orchestrator | Sunday 05 April 2026 00:57:03 +0000 (0:00:01.395) 0:05:11.061 ********** 2026-04-05 00:59:49.698316 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.698342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.698351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.698357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.698364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.698371 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.698378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.698408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.698417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.698423 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.698430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.698437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.698444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.698451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.698457 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.698464 | orchestrator | 2026-04-05 00:59:49.698472 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-04-05 00:59:49.698478 | orchestrator | Sunday 05 April 2026 00:57:06 +0000 (0:00:02.500) 
0:05:13.561 ********** 2026-04-05 00:59:49.698485 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:49.698492 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:49.698499 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:59:49.698505 | orchestrator | 2026-04-05 00:59:49.698512 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-04-05 00:59:49.698519 | orchestrator | Sunday 05 April 2026 00:57:07 +0000 (0:00:01.314) 0:05:14.876 ********** 2026-04-05 00:59:49.698531 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:49.698537 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:49.698544 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:59:49.698551 | orchestrator | 2026-04-05 00:59:49.698557 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-04-05 00:59:49.698564 | orchestrator | Sunday 05 April 2026 00:57:09 +0000 (0:00:02.212) 0:05:17.089 ********** 2026-04-05 00:59:49.698578 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:59:49.698587 | orchestrator | 2026-04-05 00:59:49.698595 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-04-05 00:59:49.698603 | orchestrator | Sunday 05 April 2026 00:57:11 +0000 (0:00:01.685) 0:05:18.774 ********** 2026-04-05 00:59:49.698610 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-04-05 00:59:49.698619 | orchestrator | 2026-04-05 00:59:49.698627 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-04-05 00:59:49.698635 | orchestrator | Sunday 05 April 2026 00:57:12 +0000 (0:00:01.004) 0:05:19.779 ********** 2026-04-05 00:59:49.698643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 
'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-05 00:59:49.698652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-05 00:59:49.698682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-05 00:59:49.698691 | orchestrator | 2026-04-05 00:59:49.698698 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-04-05 00:59:49.698706 | orchestrator | Sunday 05 April 2026 00:57:16 +0000 (0:00:04.726) 0:05:24.505 ********** 2026-04-05 00:59:49.698715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 
'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-05 00:59:49.698723 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.698732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-05 00:59:49.698745 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.698751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-05 00:59:49.698758 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.698765 | orchestrator | 2026-04-05 00:59:49.698771 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-04-05 00:59:49.698778 | orchestrator | Sunday 05 April 2026 00:57:18 +0000 (0:00:01.992) 0:05:26.498 ********** 2026-04-05 00:59:49.698789 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-05 00:59:49.698796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-05 00:59:49.698803 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.698810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-05 00:59:49.698817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-05 00:59:49.698824 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.698830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-05 00:59:49.698837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-05 00:59:49.698845 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.698851 | orchestrator | 2026-04-05 00:59:49.698858 | orchestrator | TASK [proxysql-config : Copying over 
nova-cell ProxySQL users config] ********** 2026-04-05 00:59:49.698864 | orchestrator | Sunday 05 April 2026 00:57:20 +0000 (0:00:01.527) 0:05:28.025 ********** 2026-04-05 00:59:49.698871 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:49.698878 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:49.698885 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:59:49.698891 | orchestrator | 2026-04-05 00:59:49.698921 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-05 00:59:49.698929 | orchestrator | Sunday 05 April 2026 00:57:23 +0000 (0:00:02.685) 0:05:30.710 ********** 2026-04-05 00:59:49.698936 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:49.698943 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:49.698949 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:59:49.698956 | orchestrator | 2026-04-05 00:59:49.698963 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-04-05 00:59:49.698969 | orchestrator | Sunday 05 April 2026 00:57:26 +0000 (0:00:03.610) 0:05:34.321 ********** 2026-04-05 00:59:49.698981 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-04-05 00:59:49.698988 | orchestrator | 2026-04-05 00:59:49.698994 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-04-05 00:59:49.699001 | orchestrator | Sunday 05 April 2026 00:57:28 +0000 (0:00:01.305) 0:05:35.626 ********** 2026-04-05 00:59:49.699058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 
'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-05 00:59:49.699068 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.699075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-05 00:59:49.699082 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.699093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-05 00:59:49.699100 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.699107 | orchestrator | 2026-04-05 00:59:49.699113 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-04-05 00:59:49.699120 | orchestrator | Sunday 05 April 2026 00:57:30 +0000 (0:00:02.643) 0:05:38.270 ********** 2026-04-05 00:59:49.699127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 
'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-05 00:59:49.699134 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.699140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-05 00:59:49.699147 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.699185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-05 00:59:49.699206 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.699216 | orchestrator | 2026-04-05 00:59:49.699226 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-04-05 00:59:49.699236 | 
orchestrator | Sunday 05 April 2026 00:57:32 +0000 (0:00:01.758) 0:05:40.028 ********** 2026-04-05 00:59:49.699246 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.699255 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.699266 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.699277 | orchestrator | 2026-04-05 00:59:49.699288 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-05 00:59:49.699299 | orchestrator | Sunday 05 April 2026 00:57:34 +0000 (0:00:01.985) 0:05:42.014 ********** 2026-04-05 00:59:49.699310 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:59:49.699321 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:59:49.699332 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:59:49.699343 | orchestrator | 2026-04-05 00:59:49.699351 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-05 00:59:49.699357 | orchestrator | Sunday 05 April 2026 00:57:37 +0000 (0:00:02.571) 0:05:44.585 ********** 2026-04-05 00:59:49.699364 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:59:49.699371 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:59:49.699377 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:59:49.699384 | orchestrator | 2026-04-05 00:59:49.699390 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-04-05 00:59:49.699397 | orchestrator | Sunday 05 April 2026 00:57:40 +0000 (0:00:03.036) 0:05:47.622 ********** 2026-04-05 00:59:49.699404 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-04-05 00:59:49.699411 | orchestrator | 2026-04-05 00:59:49.699417 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-04-05 00:59:49.699424 | orchestrator | Sunday 05 April 2026 00:57:42 +0000 
(0:00:01.925) 0:05:49.548 ********** 2026-04-05 00:59:49.699431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-05 00:59:49.699438 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.699449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-05 00:59:49.699457 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.699463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-05 00:59:49.699476 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.699482 | orchestrator | 
2026-04-05 00:59:49.699488 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-04-05 00:59:49.699494 | orchestrator | Sunday 05 April 2026 00:57:43 +0000 (0:00:01.329) 0:05:50.877 ********** 2026-04-05 00:59:49.699501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-05 00:59:49.699507 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.699541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-05 00:59:49.699548 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.699555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-05 00:59:49.699561 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.699567 | orchestrator | 2026-04-05 00:59:49.699573 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-04-05 00:59:49.699580 | orchestrator | Sunday 05 April 2026 00:57:45 +0000 (0:00:01.654) 0:05:52.532 ********** 2026-04-05 00:59:49.699586 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.699592 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.699598 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.699604 | orchestrator | 2026-04-05 00:59:49.699610 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-05 00:59:49.699617 | orchestrator | Sunday 05 April 2026 00:57:47 +0000 (0:00:02.116) 0:05:54.649 ********** 2026-04-05 00:59:49.699623 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:59:49.699629 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:59:49.699635 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:59:49.699641 | orchestrator | 2026-04-05 00:59:49.699647 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-05 00:59:49.699653 | orchestrator | Sunday 05 April 2026 00:57:49 +0000 (0:00:02.348) 0:05:56.997 ********** 2026-04-05 00:59:49.699659 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:59:49.699665 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:59:49.699671 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:59:49.699677 | orchestrator | 2026-04-05 00:59:49.699683 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-04-05 00:59:49.699689 | orchestrator | Sunday 05 April 2026 00:57:52 +0000 (0:00:03.383) 0:06:00.381 ********** 2026-04-05 00:59:49.699696 | orchestrator | 
included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:59:49.699706 | orchestrator | 2026-04-05 00:59:49.699713 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-04-05 00:59:49.699722 | orchestrator | Sunday 05 April 2026 00:57:54 +0000 (0:00:01.389) 0:06:01.770 ********** 2026-04-05 00:59:49.699729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 00:59:49.699737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 00:59:49.699764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 00:59:49.699772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 00:59:49.699778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.699788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': 
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 00:59:49.699799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 00:59:49.699806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 00:59:49.699830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 00:59:49.699837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.699844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 00:59:49.699858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 00:59:49.699865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 00:59:49.699871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 00:59:49.699894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.699901 | orchestrator | 2026-04-05 00:59:49.699908 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-04-05 00:59:49.699914 | orchestrator | Sunday 05 April 2026 00:57:58 +0000 (0:00:04.518) 0:06:06.289 ********** 2026-04-05 00:59:49.699921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 00:59:49.699927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 00:59:49.699944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 00:59:49.699951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 00:59:49.699957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.699963 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.699989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 00:59:49.699996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 00:59:49.700002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 00:59:49.700029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 00:59:49.700039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 00:59:49.700046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.700052 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.700078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 
00:59:49.700085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 00:59:49.700092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 00:59:49.700103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:59:49.700109 | 
orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.700116 | orchestrator | 2026-04-05 00:59:49.700122 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-04-05 00:59:49.700131 | orchestrator | Sunday 05 April 2026 00:57:59 +0000 (0:00:00.943) 0:06:07.233 ********** 2026-04-05 00:59:49.700138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-05 00:59:49.700144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-05 00:59:49.700151 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.700158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-05 00:59:49.700164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-05 00:59:49.700170 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.700177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-05 00:59:49.700183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  
2026-04-05 00:59:49.700190 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.700196 | orchestrator | 2026-04-05 00:59:49.700202 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-04-05 00:59:49.700208 | orchestrator | Sunday 05 April 2026 00:58:00 +0000 (0:00:00.952) 0:06:08.185 ********** 2026-04-05 00:59:49.700214 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:49.700220 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:49.700227 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:59:49.700233 | orchestrator | 2026-04-05 00:59:49.700239 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-04-05 00:59:49.700263 | orchestrator | Sunday 05 April 2026 00:58:02 +0000 (0:00:01.749) 0:06:09.935 ********** 2026-04-05 00:59:49.700270 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:49.700276 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:49.700283 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:59:49.700289 | orchestrator | 2026-04-05 00:59:49.700295 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-04-05 00:59:49.700307 | orchestrator | Sunday 05 April 2026 00:58:04 +0000 (0:00:02.371) 0:06:12.307 ********** 2026-04-05 00:59:49.700313 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:59:49.700319 | orchestrator | 2026-04-05 00:59:49.700325 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-04-05 00:59:49.700332 | orchestrator | Sunday 05 April 2026 00:58:06 +0000 (0:00:01.419) 0:06:13.726 ********** 2026-04-05 00:59:49.700338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 
'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 00:59:49.700349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 00:59:49.700356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 00:59:49.700382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-05 00:59:49.700397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-05 00:59:49.700408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601',
'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-05 00:59:49.700415 | orchestrator |
2026-04-05 00:59:49.700421 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] ***
2026-04-05 00:59:49.700428 | orchestrator | Sunday 05 April 2026 00:58:12 +0000 (0:00:06.627) 0:06:20.354 **********
2026-04-05 00:59:49.700434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 00:59:49.700460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-05 00:59:49.700472 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:59:49.700479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 00:59:49.700489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-05 00:59:49.700496 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:59:49.700502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 00:59:49.700531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment':
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-05 00:59:49.700538 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:59:49.700545 | orchestrator |
2026-04-05 00:59:49.700551 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ********************
2026-04-05 00:59:49.700557 | orchestrator | Sunday 05 April 2026 00:58:13 +0000 (0:00:01.085) 0:06:21.439 **********
2026-04-05 00:59:49.700564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})
2026-04-05 00:59:49.700571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})
2026-04-05 00:59:49.700577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})
2026-04-05 00:59:49.700584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})
2026-04-05 00:59:49.700591 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:59:49.700601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})
2026-04-05 00:59:49.700607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})
2026-04-05 00:59:49.700614 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:59:49.700620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})
2026-04-05 00:59:49.700626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})
2026-04-05 00:59:49.700637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})
2026-04-05 00:59:49.700643 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:59:49.700653 | orchestrator |
2026-04-05 00:59:49.700666 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] *********
2026-04-05 00:59:49.700682 | orchestrator | Sunday 05 April 2026 00:58:14 +0000 (0:00:01.057) 0:06:22.496 **********
2026-04-05 00:59:49.700692 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:59:49.700702 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:59:49.700713 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:59:49.700724 | orchestrator |
2026-04-05 00:59:49.700733 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2026-04-05 00:59:49.700770 | orchestrator | Sunday 05 April 2026 00:58:15 +0000 (0:00:00.455) 0:06:22.951 **********
2026-04-05 00:59:49.700778 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:59:49.700784 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:59:49.700790 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:59:49.700796 | orchestrator |
2026-04-05 00:59:49.700802 | orchestrator | TASK [include_role : prometheus] ***********************************************
2026-04-05 00:59:49.700809 | orchestrator | Sunday 05 April 2026 00:58:16 +0000 (0:00:01.522) 0:06:24.474 **********
2026-04-05 00:59:49.700815 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:59:49.700821 | orchestrator |
2026-04-05 00:59:49.700827 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-04-05
00:59:49.700833 | orchestrator | Sunday 05 April 2026 00:58:18 +0000 (0:00:01.771) 0:06:26.246 ********** 2026-04-05 00:59:49.700841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-05 00:59:49.700853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 00:59:49.700860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:59:49.700876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:59:49.700882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 00:59:49.700908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-05 00:59:49.700916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 00:59:49.700923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:59:49.700933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:59:49.700939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 00:59:49.700950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-05 00:59:49.700975 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 00:59:49.700983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:59:49.700989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:59:49.700995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 00:59:49.701025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:59:49.701038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 
45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-05 00:59:49.701064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:59:49.701072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:59:49.701079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 00:59:49.701085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 
'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:59:49.701097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-05 00:59:49.701104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:59:49.701126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:59:49.701134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 00:59:49.701140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 
'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:59:49.701205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-05 00:59:49.701227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:59:49.701234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:59:49.701261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 00:59:49.701268 | orchestrator | 2026-04-05 00:59:49.701275 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-04-05 00:59:49.701281 | orchestrator | Sunday 05 April 2026 00:58:23 +0000 (0:00:05.068) 0:06:31.314 ********** 2026-04-05 00:59:49.701288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-05 00:59:49.701298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-05 00:59:49.701310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 00:59:49.701317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 00:59:49.701323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:59:49.701349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:59:49.701357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 
'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:59:49.701363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:59:49.701370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 00:59:49.701384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 00:59:49.701391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:59:49.701402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 
'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:59:49.701409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-05 00:59:49.701422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 
'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-05 00:59:49.701429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:59:49.701435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:59:49.701442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:59:49.701454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:59:49.701460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 00:59:49.701467 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.701473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 00:59:49.701484 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.701493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-05 00:59:49.701500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 00:59:49.701507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:59:49.701513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': 
{'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:59:49.701523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 00:59:49.701530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:59:49.701544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-05 00:59:49.701550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:59:49.701557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:59:49.701563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 00:59:49.701569 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.701576 | orchestrator | 2026-04-05 00:59:49.701582 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-04-05 00:59:49.701591 | orchestrator | Sunday 05 April 2026 00:58:25 +0000 (0:00:01.652) 0:06:32.967 ********** 2026-04-05 00:59:49.701598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-05 00:59:49.701605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-05 00:59:49.701616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.701623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.701629 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.701637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-05 00:59:49.701655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-05 00:59:49.701667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.701679 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.701690 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.701700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-05 00:59:49.701713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-05 00:59:49.701724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:59:49.701740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  
2026-04-05 00:59:49.701753 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.701759 | orchestrator | 2026-04-05 00:59:49.701765 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-04-05 00:59:49.701772 | orchestrator | Sunday 05 April 2026 00:58:26 +0000 (0:00:01.106) 0:06:34.074 ********** 2026-04-05 00:59:49.701778 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.701784 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.701790 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.701796 | orchestrator | 2026-04-05 00:59:49.701802 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-04-05 00:59:49.701808 | orchestrator | Sunday 05 April 2026 00:58:27 +0000 (0:00:00.613) 0:06:34.688 ********** 2026-04-05 00:59:49.701814 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.701820 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.701826 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.701832 | orchestrator | 2026-04-05 00:59:49.701838 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-04-05 00:59:49.701844 | orchestrator | Sunday 05 April 2026 00:58:28 +0000 (0:00:01.473) 0:06:36.161 ********** 2026-04-05 00:59:49.701851 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:59:49.701857 | orchestrator | 2026-04-05 00:59:49.701863 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-04-05 00:59:49.701869 | orchestrator | Sunday 05 April 2026 00:58:30 +0000 (0:00:01.834) 0:06:37.996 ********** 2026-04-05 00:59:49.701879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 
'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 00:59:49.701887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 00:59:49.701897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 
'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 00:59:49.701908 | orchestrator | 2026-04-05 00:59:49.701914 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-04-05 00:59:49.701920 | orchestrator | Sunday 05 April 2026 00:58:33 +0000 (0:00:03.365) 0:06:41.361 ********** 2026-04-05 00:59:49.701927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 
'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-05 00:59:49.701937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-05 00:59:49.701944 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.701950 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.701957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-05 00:59:49.701963 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.701973 | orchestrator | 2026-04-05 00:59:49.701979 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-04-05 00:59:49.701985 | orchestrator | Sunday 05 April 2026 00:58:34 +0000 (0:00:00.455) 0:06:41.816 ********** 2026-04-05 00:59:49.701992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-05 00:59:49.701998 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.702004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-05 00:59:49.702065 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.702076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-05 00:59:49.702082 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.702089 | orchestrator | 2026-04-05 00:59:49.702095 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-04-05 00:59:49.702101 | orchestrator | Sunday 05 April 2026 00:58:35 +0000 (0:00:01.034) 0:06:42.851 ********** 2026-04-05 00:59:49.702107 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.702113 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.702119 | orchestrator | skipping: [testbed-node-2] 2026-04-05 
00:59:49.702125 | orchestrator | 2026-04-05 00:59:49.702131 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-04-05 00:59:49.702137 | orchestrator | Sunday 05 April 2026 00:58:35 +0000 (0:00:00.479) 0:06:43.330 ********** 2026-04-05 00:59:49.702144 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.702150 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.702156 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.702162 | orchestrator | 2026-04-05 00:59:49.702168 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-04-05 00:59:49.702174 | orchestrator | Sunday 05 April 2026 00:58:37 +0000 (0:00:01.387) 0:06:44.718 ********** 2026-04-05 00:59:49.702180 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:59:49.702186 | orchestrator | 2026-04-05 00:59:49.702192 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-04-05 00:59:49.702198 | orchestrator | Sunday 05 April 2026 00:58:39 +0000 (0:00:01.931) 0:06:46.649 ********** 2026-04-05 00:59:49.702205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 
'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-05 00:59:49.702216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-05 00:59:49.702232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-05 00:59:49.702240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 00:59:49.702247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 00:59:49.702257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 00:59:49.702269 | orchestrator | 2026-04-05 00:59:49.702275 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-04-05 00:59:49.702282 | orchestrator | Sunday 05 April 2026 00:58:46 +0000 (0:00:06.900) 0:06:53.550 ********** 2026-04-05 00:59:49.702292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 
'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-05 00:59:49.702299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 00:59:49.702306 | orchestrator | 
skipping: [testbed-node-0] 2026-04-05 00:59:49.702316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-05 00:59:49.702327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 00:59:49.702334 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.702344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-05 00:59:49.702352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 00:59:49.702358 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.702364 | orchestrator | 2026-04-05 00:59:49.702370 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-04-05 00:59:49.702377 | orchestrator | Sunday 05 April 2026 00:58:46 +0000 (0:00:00.774) 0:06:54.325 ********** 2026-04-05 00:59:49.702384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-05 00:59:49.702391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-05 00:59:49.702405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-05 00:59:49.702413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-05 00:59:49.702419 | orchestrator | skipping: 
[testbed-node-0] 2026-04-05 00:59:49.702425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-05 00:59:49.702432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-05 00:59:49.702438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-05 00:59:49.702444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-05 00:59:49.702451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-05 00:59:49.702460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-05 00:59:49.702466 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.702473 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-05 00:59:49.702479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-05 00:59:49.702486 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.702492 | orchestrator | 2026-04-05 00:59:49.702498 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-04-05 00:59:49.702504 | orchestrator | Sunday 05 April 2026 00:58:47 +0000 (0:00:01.007) 0:06:55.332 ********** 2026-04-05 00:59:49.702510 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:49.702516 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:49.702523 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:59:49.702529 | orchestrator | 2026-04-05 00:59:49.702535 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-04-05 00:59:49.702541 | orchestrator | Sunday 05 April 2026 00:58:49 +0000 (0:00:01.643) 0:06:56.975 ********** 2026-04-05 00:59:49.702551 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:49.702558 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:49.702564 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:59:49.702570 | orchestrator | 2026-04-05 00:59:49.702576 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-04-05 00:59:49.702582 | orchestrator | Sunday 05 April 2026 00:58:51 +0000 (0:00:02.321) 0:06:59.297 ********** 2026-04-05 00:59:49.702588 | orchestrator | skipping: [testbed-node-0] 2026-04-05 
00:59:49.702594 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.702600 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.702606 | orchestrator | 2026-04-05 00:59:49.702613 | orchestrator | TASK [include_role : trove] **************************************************** 2026-04-05 00:59:49.702619 | orchestrator | Sunday 05 April 2026 00:58:52 +0000 (0:00:00.371) 0:06:59.669 ********** 2026-04-05 00:59:49.702625 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.702631 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.702639 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.702650 | orchestrator | 2026-04-05 00:59:49.702662 | orchestrator | TASK [include_role : venus] **************************************************** 2026-04-05 00:59:49.702672 | orchestrator | Sunday 05 April 2026 00:58:52 +0000 (0:00:00.333) 0:07:00.002 ********** 2026-04-05 00:59:49.702683 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.702695 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.702707 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.702718 | orchestrator | 2026-04-05 00:59:49.702733 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-04-05 00:59:49.702739 | orchestrator | Sunday 05 April 2026 00:58:52 +0000 (0:00:00.356) 0:07:00.359 ********** 2026-04-05 00:59:49.702745 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.702752 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.702758 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.702764 | orchestrator | 2026-04-05 00:59:49.702770 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-04-05 00:59:49.702776 | orchestrator | Sunday 05 April 2026 00:58:53 +0000 (0:00:00.676) 0:07:01.035 ********** 2026-04-05 00:59:49.702782 | orchestrator | skipping: [testbed-node-0] 2026-04-05 
00:59:49.702788 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.702794 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.702800 | orchestrator | 2026-04-05 00:59:49.702806 | orchestrator | TASK [include_role : loadbalancer] ********************************************* 2026-04-05 00:59:49.702812 | orchestrator | Sunday 05 April 2026 00:58:53 +0000 (0:00:00.353) 0:07:01.389 ********** 2026-04-05 00:59:49.702818 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:59:49.702824 | orchestrator | 2026-04-05 00:59:49.702830 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-04-05 00:59:49.702836 | orchestrator | Sunday 05 April 2026 00:58:55 +0000 (0:00:01.891) 0:07:03.281 ********** 2026-04-05 00:59:49.702843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-05 00:59:49.702854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-05 00:59:49.702868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-05 00:59:49.702874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 00:59:49.702884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 00:59:49.702891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 00:59:49.702897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-05 00:59:49.702903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-05 00:59:49.702913 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-05 00:59:49.702924 | orchestrator | 2026-04-05 00:59:49.702930 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-04-05 00:59:49.702936 | orchestrator | Sunday 05 April 2026 00:58:58 +0000 (0:00:02.455) 0:07:05.736 ********** 2026-04-05 00:59:49.702942 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 00:59:49.702949 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:59:49.702955 | orchestrator | } 2026-04-05 00:59:49.702961 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 00:59:49.702967 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:59:49.702973 | orchestrator | } 2026-04-05 00:59:49.702979 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 00:59:49.702985 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:59:49.702992 | orchestrator | } 2026-04-05 00:59:49.702998 | orchestrator | 2026-04-05 00:59:49.703004 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 00:59:49.703052 | orchestrator | Sunday 05 April 2026 00:58:58 +0000 (0:00:00.372) 0:07:06.109 ********** 2026-04-05 00:59:49.703059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-05 00:59:49.703066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:59:49.703076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:59:49.703083 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.703089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-05 00:59:49.703103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:59:49.703114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:59:49.703121 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.703126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-05 00:59:49.703132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:59:49.703141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:59:49.703147 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.703152 | orchestrator | 2026-04-05 00:59:49.703157 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-04-05 00:59:49.703163 | orchestrator | Sunday 05 April 2026 00:59:00 +0000 (0:00:01.725) 0:07:07.834 ********** 2026-04-05 00:59:49.703168 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:59:49.703174 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:59:49.703179 | orchestrator | ok: [testbed-node-2] 
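The container items looped over above each carry a kolla-style `healthcheck` dict (`interval`, `retries`, `start_period`, `test`, `timeout`). A minimal sketch of how such a dict could map onto Docker health-check CLI flags — assuming the bare numbers are seconds; kolla-ansible's real converter may differ:

```python
# Sketch: translate a kolla-style healthcheck dict (as seen in the task
# items above) into `docker run` flags. Treating the numeric fields as
# seconds is an assumption made for illustration.

def healthcheck_to_docker_flags(hc: dict) -> list[str]:
    """Map a kolla healthcheck definition to docker CLI options."""
    flags = [
        "--health-interval", f"{hc['interval']}s",
        "--health-retries", str(hc["retries"]),
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
    ]
    kind, cmd = hc["test"]  # e.g. ["CMD-SHELL", "healthcheck_curl ..."]
    if kind == "CMD-SHELL":
        flags += ["--health-cmd", cmd]
    return flags

example = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:61313"],
    "timeout": "30",
}
print(healthcheck_to_docker_flags(example))
```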
2026-04-05 00:59:49.703185 | orchestrator | 2026-04-05 00:59:49.703190 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-04-05 00:59:49.703195 | orchestrator | Sunday 05 April 2026 00:59:01 +0000 (0:00:00.722) 0:07:08.557 ********** 2026-04-05 00:59:49.703200 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:59:49.703206 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:59:49.703211 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:59:49.703221 | orchestrator | 2026-04-05 00:59:49.703226 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-04-05 00:59:49.703232 | orchestrator | Sunday 05 April 2026 00:59:01 +0000 (0:00:00.381) 0:07:08.939 ********** 2026-04-05 00:59:49.703237 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:59:49.703243 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:59:49.703248 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:59:49.703253 | orchestrator | 2026-04-05 00:59:49.703258 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-04-05 00:59:49.703264 | orchestrator | Sunday 05 April 2026 00:59:02 +0000 (0:00:01.318) 0:07:10.258 ********** 2026-04-05 00:59:49.703269 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:59:49.703274 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:59:49.703280 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:59:49.703285 | orchestrator | 2026-04-05 00:59:49.703290 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-04-05 00:59:49.703295 | orchestrator | Sunday 05 April 2026 00:59:03 +0000 (0:00:00.964) 0:07:11.222 ********** 2026-04-05 00:59:49.703301 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:59:49.703306 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:59:49.703311 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:59:49.703317 | orchestrator | 2026-04-05 00:59:49.703322 | 
orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-04-05 00:59:49.703327 | orchestrator | Sunday 05 April 2026 00:59:04 +0000 (0:00:01.074) 0:07:12.297 ********** 2026-04-05 00:59:49.703333 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:49.703338 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:49.703344 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:59:49.703349 | orchestrator | 2026-04-05 00:59:49.703354 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-04-05 00:59:49.703359 | orchestrator | Sunday 05 April 2026 00:59:14 +0000 (0:00:10.110) 0:07:22.408 ********** 2026-04-05 00:59:49.703365 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:59:49.703370 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:59:49.703378 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:59:49.703384 | orchestrator | 2026-04-05 00:59:49.703389 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-04-05 00:59:49.703395 | orchestrator | Sunday 05 April 2026 00:59:16 +0000 (0:00:01.213) 0:07:23.621 ********** 2026-04-05 00:59:49.703400 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:49.703405 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:49.703411 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:59:49.703416 | orchestrator | 2026-04-05 00:59:49.703421 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-04-05 00:59:49.703427 | orchestrator | Sunday 05 April 2026 00:59:31 +0000 (0:00:15.000) 0:07:38.621 ********** 2026-04-05 00:59:49.703432 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:59:49.703438 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:59:49.703443 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:59:49.703448 | orchestrator | 2026-04-05 00:59:49.703454 | orchestrator | RUNNING HANDLER [loadbalancer 
: Start backup keepalived container] ************* 2026-04-05 00:59:49.703459 | orchestrator | Sunday 05 April 2026 00:59:31 +0000 (0:00:00.840) 0:07:39.462 ********** 2026-04-05 00:59:49.703464 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:59:49.703470 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:49.703475 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:49.703480 | orchestrator | 2026-04-05 00:59:49.703486 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-04-05 00:59:49.703491 | orchestrator | Sunday 05 April 2026 00:59:41 +0000 (0:00:09.971) 0:07:49.434 ********** 2026-04-05 00:59:49.703497 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.703502 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.703507 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.703513 | orchestrator | 2026-04-05 00:59:49.703518 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-04-05 00:59:49.703529 | orchestrator | Sunday 05 April 2026 00:59:42 +0000 (0:00:00.725) 0:07:50.159 ********** 2026-04-05 00:59:49.703534 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.703539 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.703545 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.703550 | orchestrator | 2026-04-05 00:59:49.703556 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-04-05 00:59:49.703561 | orchestrator | Sunday 05 April 2026 00:59:43 +0000 (0:00:00.368) 0:07:50.528 ********** 2026-04-05 00:59:49.703566 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.703572 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.703577 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.703583 | orchestrator | 2026-04-05 00:59:49.703588 | orchestrator | RUNNING HANDLER [loadbalancer : 
Start master haproxy container] **************** 2026-04-05 00:59:49.703593 | orchestrator | Sunday 05 April 2026 00:59:43 +0000 (0:00:00.394) 0:07:50.923 ********** 2026-04-05 00:59:49.703599 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.703604 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.703610 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.703615 | orchestrator | 2026-04-05 00:59:49.703620 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-04-05 00:59:49.703626 | orchestrator | Sunday 05 April 2026 00:59:43 +0000 (0:00:00.383) 0:07:51.306 ********** 2026-04-05 00:59:49.703631 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.703636 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.703642 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.703647 | orchestrator | 2026-04-05 00:59:49.703656 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-04-05 00:59:49.703661 | orchestrator | Sunday 05 April 2026 00:59:44 +0000 (0:00:00.727) 0:07:52.034 ********** 2026-04-05 00:59:49.703667 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:49.703672 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:49.703677 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:49.703683 | orchestrator | 2026-04-05 00:59:49.703688 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-04-05 00:59:49.703693 | orchestrator | Sunday 05 April 2026 00:59:44 +0000 (0:00:00.382) 0:07:52.417 ********** 2026-04-05 00:59:49.703699 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:59:49.703704 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:59:49.703709 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:59:49.703715 | orchestrator | 2026-04-05 00:59:49.703720 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to 
listen on VIP] ************ 2026-04-05 00:59:49.703726 | orchestrator | Sunday 05 April 2026 00:59:45 +0000 (0:00:00.957) 0:07:53.374 ********** 2026-04-05 00:59:49.703731 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:59:49.703736 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:59:49.703742 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:59:49.703747 | orchestrator | 2026-04-05 00:59:49.703753 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:59:49.703758 | orchestrator | testbed-node-0 : ok=127  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-04-05 00:59:49.703764 | orchestrator | testbed-node-1 : ok=126  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-04-05 00:59:49.703770 | orchestrator | testbed-node-2 : ok=126  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-04-05 00:59:49.703775 | orchestrator | 2026-04-05 00:59:49.703780 | orchestrator | 2026-04-05 00:59:49.703786 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:59:49.703791 | orchestrator | Sunday 05 April 2026 00:59:46 +0000 (0:00:00.891) 0:07:54.266 ********** 2026-04-05 00:59:49.703796 | orchestrator | =============================================================================== 2026-04-05 00:59:49.703807 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 15.00s 2026-04-05 00:59:49.703812 | orchestrator | loadbalancer : Start backup haproxy container -------------------------- 10.11s 2026-04-05 00:59:49.703817 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.97s 2026-04-05 00:59:49.703826 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 8.61s 2026-04-05 00:59:49.703831 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 7.87s 
2026-04-05 00:59:49.703837 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 7.84s 2026-04-05 00:59:49.703842 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.90s 2026-04-05 00:59:49.703847 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.63s 2026-04-05 00:59:49.703853 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 6.48s 2026-04-05 00:59:49.703858 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 6.39s 2026-04-05 00:59:49.703863 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 6.28s 2026-04-05 00:59:49.703868 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 6.07s 2026-04-05 00:59:49.703874 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 5.66s 2026-04-05 00:59:49.703879 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 5.23s 2026-04-05 00:59:49.703884 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 5.21s 2026-04-05 00:59:49.703890 | orchestrator | loadbalancer : Copying over haproxy.cfg --------------------------------- 5.13s 2026-04-05 00:59:49.703895 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.07s 2026-04-05 00:59:49.703900 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 5.05s 2026-04-05 00:59:49.703906 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.95s 2026-04-05 00:59:49.703911 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 4.95s 2026-04-05 00:59:49.703916 | orchestrator | 2026-04-05 00:59:49 | INFO  | Task 75d9bffd-d631-4038-bb63-abf7ee08b119 is in state STARTED 
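The handler sequence above restarts the loadbalancer stack on the "backup" nodes first (stop keepalived/haproxy/proxysql, start them again, wait for health) before the "master" steps, which were all skipped in this run. A minimal sketch of that backup-first ordering, with illustrative node and action names (not kolla-ansible's actual handler implementation):

```python
# Sketch of the handler ordering visible above: restart the stack on
# backup nodes first, then on the master node. Node names and the
# (node, action) plan format are illustrative.

def rolling_restart_plan(nodes, master):
    """Return ordered (node, action) steps for a backup-first restart."""
    backups = [n for n in nodes if n != master]
    steps = []
    for phase, targets in (("backup", backups), ("master", [master])):
        # Stop order mirrors the log: keepalived, haproxy, proxysql.
        for svc in ("keepalived", "haproxy", "proxysql"):
            steps += [(n, f"stop {phase} {svc}") for n in targets]
        # Start order mirrors the log: haproxy, proxysql, keepalived.
        for svc in ("haproxy", "proxysql", "keepalived"):
            steps += [(n, f"start {phase} {svc}") for n in targets]
    return steps
```

With `rolling_restart_plan(["n0", "n1", "n2"], "n0")`, every backup-node step precedes any master-node step, so the VIP holder keeps serving until its peers are healthy again.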
2026-04-05 00:59:49.703922 | orchestrator | 2026-04-05 00:59:49 | INFO  | Task 325b6600-e709-47a4-b335-835f2bb43dd5 is in state STARTED 2026-04-05 00:59:49.703927 | orchestrator | 2026-04-05 00:59:49 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:59:49.703933 | orchestrator | 2026-04-05 00:59:49 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:52.720366 | orchestrator | 2026-04-05 00:59:52 | INFO  | Task 75d9bffd-d631-4038-bb63-abf7ee08b119 is in state STARTED 2026-04-05 00:59:52.722774 | orchestrator | 2026-04-05 00:59:52 | INFO  | Task 325b6600-e709-47a4-b335-835f2bb43dd5 is in state STARTED 2026-04-05 00:59:52.724932 | orchestrator | 2026-04-05 00:59:52 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:59:52.725045 | orchestrator | 2026-04-05 00:59:52 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:55.765862 | orchestrator | 2026-04-05 00:59:55 | INFO  | Task 75d9bffd-d631-4038-bb63-abf7ee08b119 is in state STARTED 2026-04-05 00:59:55.768138 | orchestrator | 2026-04-05 00:59:55 | INFO  | Task 325b6600-e709-47a4-b335-835f2bb43dd5 is in state STARTED 2026-04-05 00:59:55.769255 | orchestrator | 2026-04-05 00:59:55 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:59:55.769291 | orchestrator | 2026-04-05 00:59:55 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:58.805487 | orchestrator | 2026-04-05 00:59:58 | INFO  | Task 75d9bffd-d631-4038-bb63-abf7ee08b119 is in state STARTED 2026-04-05 00:59:58.806296 | orchestrator | 2026-04-05 00:59:58 | INFO  | Task 325b6600-e709-47a4-b335-835f2bb43dd5 is in state STARTED 2026-04-05 00:59:58.807220 | orchestrator | 2026-04-05 00:59:58 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 00:59:58.807254 | orchestrator | 2026-04-05 00:59:58 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:01.850979 | orchestrator | 2026-04-05 
01:00:01 | INFO  | Task 75d9bffd-d631-4038-bb63-abf7ee08b119 is in state STARTED 2026-04-05 01:00:01.851768 | orchestrator | 2026-04-05 01:00:01 | INFO  | Task 325b6600-e709-47a4-b335-835f2bb43dd5 is in state STARTED 2026-04-05 01:00:01.853194 | orchestrator | 2026-04-05 01:00:01 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 01:00:01.853246 | orchestrator | 2026-04-05 01:00:01 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:04.889982 | orchestrator | 2026-04-05 01:00:04 | INFO  | Task 75d9bffd-d631-4038-bb63-abf7ee08b119 is in state STARTED 2026-04-05 01:00:04.890948 | orchestrator | 2026-04-05 01:00:04 | INFO  | Task 325b6600-e709-47a4-b335-835f2bb43dd5 is in state STARTED 2026-04-05 01:00:04.891682 | orchestrator | 2026-04-05 01:00:04 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 01:00:04.891712 | orchestrator | 2026-04-05 01:00:04 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:07.925626 | orchestrator | 2026-04-05 01:00:07 | INFO  | Task 75d9bffd-d631-4038-bb63-abf7ee08b119 is in state STARTED 2026-04-05 01:00:07.925800 | orchestrator | 2026-04-05 01:00:07 | INFO  | Task 325b6600-e709-47a4-b335-835f2bb43dd5 is in state STARTED 2026-04-05 01:00:07.927731 | orchestrator | 2026-04-05 01:00:07 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 01:00:07.927756 | orchestrator | 2026-04-05 01:00:07 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:10.962677 | orchestrator | 2026-04-05 01:00:10 | INFO  | Task 75d9bffd-d631-4038-bb63-abf7ee08b119 is in state STARTED 2026-04-05 01:00:10.963163 | orchestrator | 2026-04-05 01:00:10 | INFO  | Task 325b6600-e709-47a4-b335-835f2bb43dd5 is in state STARTED 2026-04-05 01:00:10.963752 | orchestrator | 2026-04-05 01:00:10 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 01:00:10.963767 | orchestrator | 2026-04-05 01:00:10 | INFO  | Wait 1 
second(s) until the next check 2026-04-05 01:00:13.986111 | orchestrator | 2026-04-05 01:00:13 | INFO  | Task 75d9bffd-d631-4038-bb63-abf7ee08b119 is in state STARTED 2026-04-05 01:00:13.986388 | orchestrator | 2026-04-05 01:00:13 | INFO  | Task 325b6600-e709-47a4-b335-835f2bb43dd5 is in state STARTED 2026-04-05 01:00:13.989305 | orchestrator | 2026-04-05 01:00:13 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 01:00:13.989422 | orchestrator | 2026-04-05 01:00:13 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:17.056422 | orchestrator | 2026-04-05 01:00:17 | INFO  | Task 75d9bffd-d631-4038-bb63-abf7ee08b119 is in state STARTED 2026-04-05 01:00:17.060133 | orchestrator | 2026-04-05 01:00:17 | INFO  | Task 325b6600-e709-47a4-b335-835f2bb43dd5 is in state STARTED 2026-04-05 01:00:17.060495 | orchestrator | 2026-04-05 01:00:17 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 01:00:17.061057 | orchestrator | 2026-04-05 01:00:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:20.140042 | orchestrator | 2026-04-05 01:00:20 | INFO  | Task 75d9bffd-d631-4038-bb63-abf7ee08b119 is in state STARTED 2026-04-05 01:00:20.140188 | orchestrator | 2026-04-05 01:00:20 | INFO  | Task 325b6600-e709-47a4-b335-835f2bb43dd5 is in state STARTED 2026-04-05 01:00:20.141885 | orchestrator | 2026-04-05 01:00:20 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 01:00:20.142082 | orchestrator | 2026-04-05 01:00:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:23.191382 | orchestrator | 2026-04-05 01:00:23 | INFO  | Task 75d9bffd-d631-4038-bb63-abf7ee08b119 is in state STARTED 2026-04-05 01:00:23.192801 | orchestrator | 2026-04-05 01:00:23 | INFO  | Task 325b6600-e709-47a4-b335-835f2bb43dd5 is in state STARTED 2026-04-05 01:00:23.194274 | orchestrator | 2026-04-05 01:00:23 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 
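The repeated "Task … is in state STARTED / Wait 1 second(s) until the next check" records follow a simple poll-until-terminal pattern. A minimal sketch of that loop, with `get_task_state` as a stand-in for however the OSISM client actually queries its task API:

```python
# Sketch of the polling pattern in the log: check each pending task ID,
# drop the ones that reached a terminal state, sleep, repeat.
import time
from typing import Callable

TERMINAL = {"SUCCESS", "FAILURE", "REVOKED"}  # assumed terminal states

def wait_for_tasks(task_ids, get_task_state: Callable[[str], str],
                   interval: float = 1.0, sleep=time.sleep):
    """Poll until every task reports a terminal state; return final states."""
    pending = set(task_ids)
    states = {}
    while pending:
        for tid in sorted(pending):
            states[tid] = get_task_state(tid)
            print(f"Task {tid} is in state {states[tid]}")
        pending = {t for t in pending if states[t] not in TERMINAL}
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            sleep(interval)
    return states
```

For example, with a fake state source that returns `STARTED` twice and then `SUCCESS`, `wait_for_tasks(["t1"], fake, sleep=lambda s: None)` prints three state lines and returns `{"t1": "SUCCESS"}`.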
2026-04-05 01:00:23.194771 | orchestrator | 2026-04-05 01:00:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:26.267460 | orchestrator | 2026-04-05 01:00:26 | INFO  | Task 75d9bffd-d631-4038-bb63-abf7ee08b119 is in state STARTED 2026-04-05 01:00:26.268305 | orchestrator | 2026-04-05 01:00:26 | INFO  | Task 325b6600-e709-47a4-b335-835f2bb43dd5 is in state STARTED 2026-04-05 01:00:26.269911 | orchestrator | 2026-04-05 01:00:26 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 01:00:26.269952 | orchestrator | 2026-04-05 01:00:26 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:29.314413 | orchestrator | 2026-04-05 01:00:29 | INFO  | Task 75d9bffd-d631-4038-bb63-abf7ee08b119 is in state STARTED 2026-04-05 01:00:29.316387 | orchestrator | 2026-04-05 01:00:29 | INFO  | Task 325b6600-e709-47a4-b335-835f2bb43dd5 is in state STARTED 2026-04-05 01:00:29.317806 | orchestrator | 2026-04-05 01:00:29 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state STARTED 2026-04-05 01:00:29.318384 | orchestrator | 2026-04-05 01:00:29 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:32.372238 | orchestrator | 2026-04-05 01:00:32 | INFO  | Task 75d9bffd-d631-4038-bb63-abf7ee08b119 is in state STARTED 2026-04-05 01:00:32.373933 | orchestrator | 2026-04-05 01:00:32 | INFO  | Task 325b6600-e709-47a4-b335-835f2bb43dd5 is in state STARTED 2026-04-05 01:00:32.384127 | orchestrator | 2026-04-05 01:00:32 | INFO  | Task 213b66ba-da42-4dc4-ac12-27b3a63d9ff2 is in state SUCCESS 2026-04-05 01:00:32.387031 | orchestrator | 2026-04-05 01:00:32.387130 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-05 01:00:32.387145 | orchestrator | 2.16.14 2026-04-05 01:00:32.387155 | orchestrator | 2026-04-05 01:00:32.387164 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-04-05 01:00:32.387174 | orchestrator | 2026-04-05 
01:00:32.387183 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-05 01:00:32.387192 | orchestrator | Sunday 05 April 2026 00:48:48 +0000 (0:00:01.130) 0:00:01.130 ********** 2026-04-05 01:00:32.387202 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:00:32.387213 | orchestrator | 2026-04-05 01:00:32.387221 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-05 01:00:32.387230 | orchestrator | Sunday 05 April 2026 00:48:50 +0000 (0:00:01.702) 0:00:02.833 ********** 2026-04-05 01:00:32.387239 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.387249 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:32.387257 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:32.387266 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:32.387274 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:32.387283 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:32.387291 | orchestrator | 2026-04-05 01:00:32.387300 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-05 01:00:32.387309 | orchestrator | Sunday 05 April 2026 00:48:52 +0000 (0:00:02.257) 0:00:05.090 ********** 2026-04-05 01:00:32.387317 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:32.387492 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.387506 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:32.387515 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:32.387526 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:32.387537 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:32.387547 | orchestrator | 2026-04-05 01:00:32.387928 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-05 01:00:32.387940 | orchestrator | Sunday 
05 April 2026 00:48:53 +0000 (0:00:00.754) 0:00:05.844 **********
2026-04-05 01:00:32.387949 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.387958 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.387966 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.387999 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:32.388008 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:32.388016 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:32.388025 | orchestrator |
2026-04-05 01:00:32.388033 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-05 01:00:32.388042 | orchestrator | Sunday 05 April 2026 00:48:54 +0000 (0:00:01.058) 0:00:06.903 **********
2026-04-05 01:00:32.388051 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.388059 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.388069 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.388082 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:32.388096 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:32.388107 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:32.388116 | orchestrator |
2026-04-05 01:00:32.388124 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-05 01:00:32.388133 | orchestrator | Sunday 05 April 2026 00:48:55 +0000 (0:00:01.346) 0:00:08.250 **********
2026-04-05 01:00:32.388142 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.388151 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.388159 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.388168 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:32.388176 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:32.388185 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:32.389010 | orchestrator |
2026-04-05 01:00:32.389056 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-05 01:00:32.389067 | orchestrator | Sunday 05 April 2026 00:48:56 +0000 (0:00:01.189) 0:00:09.439 **********
2026-04-05 01:00:32.389076 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.389085 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.389093 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.389273 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:32.389291 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:32.390362 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:32.390389 | orchestrator |
2026-04-05 01:00:32.390402 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-05 01:00:32.390415 | orchestrator | Sunday 05 April 2026 00:48:58 +0000 (0:00:01.772) 0:00:11.212 **********
2026-04-05 01:00:32.390427 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.390448 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.390466 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.390483 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.390501 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.390518 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.390533 | orchestrator |
2026-04-05 01:00:32.390551 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-05 01:00:32.390569 | orchestrator | Sunday 05 April 2026 00:48:59 +0000 (0:00:01.225) 0:00:12.439 **********
2026-04-05 01:00:32.390588 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.390608 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.390626 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.390646 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:32.390664 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:32.390682 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:32.390700 | orchestrator |
2026-04-05 01:00:32.390758 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-05 01:00:32.390778 | orchestrator | Sunday 05 April 2026 00:49:01 +0000 (0:00:01.799) 0:00:14.238 **********
2026-04-05 01:00:32.390794 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 01:00:32.390810 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 01:00:32.390827 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 01:00:32.390846 | orchestrator |
2026-04-05 01:00:32.390863 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-05 01:00:32.390881 | orchestrator | Sunday 05 April 2026 00:49:02 +0000 (0:00:00.985) 0:00:15.224 **********
2026-04-05 01:00:32.390899 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.390918 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.390937 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.391239 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:32.391264 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:32.391275 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:32.391289 | orchestrator |
2026-04-05 01:00:32.391308 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-05 01:00:32.391327 | orchestrator | Sunday 05 April 2026 00:49:03 +0000 (0:00:01.431) 0:00:16.656 **********
2026-04-05 01:00:32.391343 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 01:00:32.391360 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 01:00:32.391379 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 01:00:32.391397 | orchestrator |
2026-04-05 01:00:32.391416 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-05 01:00:32.391435 | orchestrator | Sunday 05 April 2026 00:49:06 +0000 (0:00:02.510) 0:00:19.166 **********
2026-04-05 01:00:32.391447 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-05 01:00:32.391459 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-05 01:00:32.391494 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-05 01:00:32.391505 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.391516 | orchestrator |
2026-04-05 01:00:32.391527 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-05 01:00:32.391537 | orchestrator | Sunday 05 April 2026 00:49:06 +0000 (0:00:00.454) 0:00:19.621 **********
2026-04-05 01:00:32.391551 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-05 01:00:32.391565 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-05 01:00:32.391576 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-05 01:00:32.391587 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.391598 | orchestrator |
2026-04-05 01:00:32.391609 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-05 01:00:32.391620 | orchestrator | Sunday 05 April 2026 00:49:07 +0000 (0:00:00.897) 0:00:20.519 **********
2026-04-05 01:00:32.391650 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-05 01:00:32.391681 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-05 01:00:32.391693 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-05 01:00:32.391704 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.391715 | orchestrator |
2026-04-05 01:00:32.391726 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-05 01:00:32.391737 | orchestrator | Sunday 05 April 2026 00:49:07 +0000 (0:00:00.153) 0:00:20.672 **********
2026-04-05 01:00:32.391857 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-05 00:49:04.554508', 'end': '2026-04-05 00:49:04.632572', 'delta': '0:00:00.078064', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-05 01:00:32.391883 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-05 00:49:05.295453', 'end': '2026-04-05 00:49:05.410335', 'delta': '0:00:00.114882', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-05 01:00:32.391896 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-05 00:49:06.210940', 'end': '2026-04-05 00:49:06.295835', 'delta': '0:00:00.084895', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-05 01:00:32.391907 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.391918 | orchestrator |
2026-04-05 01:00:32.391932 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-05 01:00:32.391951 | orchestrator | Sunday 05 April 2026 00:49:08 +0000 (0:00:00.692) 0:00:21.364 **********
2026-04-05 01:00:32.391999 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.392018 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.392051 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.392069 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:32.392086 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:32.392105 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:32.392124 | orchestrator |
2026-04-05 01:00:32.392143 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-05 01:00:32.392161 | orchestrator | Sunday 05 April 2026 00:49:10 +0000 (0:00:01.772) 0:00:23.137 **********
2026-04-05 01:00:32.392178 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-05 01:00:32.392198 | orchestrator |
2026-04-05 01:00:32.392218 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-05 01:00:32.392248 | orchestrator | Sunday 05 April 2026 00:49:11 +0000 (0:00:00.893) 0:00:24.030 **********
2026-04-05 01:00:32.392269 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.392281 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.392291 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.392302 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.392313 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.392324 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.392334 | orchestrator |
2026-04-05 01:00:32.392346 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-05 01:00:32.392356 | orchestrator | Sunday 05 April 2026 00:49:12 +0000 (0:00:01.531) 0:00:25.562 **********
2026-04-05 01:00:32.392367 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.392377 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.392388 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.392399 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.392410 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.392421 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.392431 | orchestrator |
2026-04-05 01:00:32.392442 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-05 01:00:32.392453 | orchestrator | Sunday 05 April 2026 00:49:16 +0000 (0:00:03.351) 0:00:28.914 **********
2026-04-05 01:00:32.392466 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.392479 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.392491 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.392503 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.392516 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.392530 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.392542 | orchestrator |
2026-04-05 01:00:32.392555 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-05 01:00:32.392568 | orchestrator | Sunday 05 April 2026 00:49:19 +0000 (0:00:02.833) 0:00:31.747 **********
2026-04-05 01:00:32.392581 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.392594 | orchestrator |
2026-04-05 01:00:32.392607 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-05 01:00:32.392620 | orchestrator | Sunday 05 April 2026 00:49:19 +0000 (0:00:00.343) 0:00:32.091 **********
2026-04-05 01:00:32.392633 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.392724 | orchestrator |
2026-04-05 01:00:32.392739 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-05 01:00:32.392753 | orchestrator | Sunday 05 April 2026 00:49:19 +0000 (0:00:00.547) 0:00:32.638 **********
2026-04-05 01:00:32.392767 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.392780 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.392792 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.392932 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.392948 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.392959 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.393017 | orchestrator |
2026-04-05 01:00:32.393038 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-05 01:00:32.393050 | orchestrator | Sunday 05 April 2026 00:49:21 +0000 (0:00:01.178) 0:00:33.817 **********
2026-04-05 01:00:32.393074 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.393085 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.393096 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.393107 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.393118 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.393128 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.393139 | orchestrator |
2026-04-05 01:00:32.393150 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-05 01:00:32.393161 | orchestrator | Sunday 05 April 2026 00:49:22 +0000 (0:00:01.614) 0:00:35.431 **********
2026-04-05 01:00:32.393172 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.393182 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.393193 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.393204 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.393215 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.393231 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.393250 | orchestrator |
2026-04-05 01:00:32.393269 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-05 01:00:32.393286 | orchestrator | Sunday 05 April 2026 00:49:23 +0000 (0:00:01.295) 0:00:36.726 **********
2026-04-05 01:00:32.393306 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.393326 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.393345 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.393363 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.393383 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.393402 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.393421 | orchestrator |
2026-04-05 01:00:32.393433 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-05 01:00:32.393444 | orchestrator | Sunday 05 April 2026 00:49:25 +0000 (0:00:01.651) 0:00:38.378 **********
2026-04-05 01:00:32.393454 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.393465 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.393476 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.393486 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.393497 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.393508 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.393518 | orchestrator |
2026-04-05 01:00:32.393531 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-05 01:00:32.393544 | orchestrator | Sunday 05 April 2026 00:49:26 +0000 (0:00:01.181) 0:00:39.560 **********
2026-04-05 01:00:32.393556 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.393569 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.393582 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.393595 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.393608 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.393621 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.393634 | orchestrator |
2026-04-05 01:00:32.393647 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-05 01:00:32.393660 | orchestrator | Sunday 05 April 2026 00:49:27 +0000 (0:00:01.120) 0:00:40.680 **********
2026-04-05 01:00:32.393673 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.393685 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.393698 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.393721 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.393734 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.393747 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.393761 | orchestrator |
2026-04-05 01:00:32.393774 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-05 01:00:32.393787 | orchestrator | Sunday 05 April 2026 00:49:28 +0000 (0:00:00.731) 0:00:41.411 **********
2026-04-05 01:00:32.393802 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bd7e6aba--230a--5307--afd3--3b474950d4d0-osd--block--bd7e6aba--230a--5307--afd3--3b474950d4d0', 'dm-uuid-LVM-m1QlHxCsbxztU2FuOybrbqS7CBCT7wjEoNVQSmaG9N9pwN9NxAX2gf2DoZoQBSLW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-05 01:00:32.393827 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ffa9e237--b4c6--554d--9530--d8db42979c07-osd--block--ffa9e237--b4c6--554d--9530--d8db42979c07', 'dm-uuid-LVM-MPhbeREO53p8Jlrygb16JZjJdslDbKe9UFAHKtvpsM3Td0r3FZzHgndlgeccqD31'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-05 01:00:32.393947 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-05 01:00:32.393965 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-05 01:00:32.394097 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-05 01:00:32.394111 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-05 01:00:32.394122 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-05 01:00:32.394133 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-05 01:00:32.394154 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-05 01:00:32.394176 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-05 01:00:32.394276 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5', 'scsi-SQEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5-part1', 'scsi-SQEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5-part14', 'scsi-SQEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5-part15', 'scsi-SQEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5-part16', 'scsi-SQEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-05 01:00:32.394295 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--bd7e6aba--230a--5307--afd3--3b474950d4d0-osd--block--bd7e6aba--230a--5307--afd3--3b474950d4d0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-g3afkn-oiK6-fbGy-ikDI-QGrc-Ke5t-Vng8th', 'scsi-0QEMU_QEMU_HARDDISK_caeb3c42-c4b8-40bd-8e18-9e72dc321772', 'scsi-SQEMU_QEMU_HARDDISK_caeb3c42-c4b8-40bd-8e18-9e72dc321772'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-05 01:00:32.394313 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ffa9e237--b4c6--554d--9530--d8db42979c07-osd--block--ffa9e237--b4c6--554d--9530--d8db42979c07'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oEjU1j-7vo1-FBbZ-xQfu-XNze-tfrU-fzo2Hf', 'scsi-0QEMU_QEMU_HARDDISK_62ed18a5-03b2-4cb7-a868-d43e6cb85064', 'scsi-SQEMU_QEMU_HARDDISK_62ed18a5-03b2-4cb7-a868-d43e6cb85064'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-05 01:00:32.394332 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_831c674b-a7a8-4a18-9cfe-2b7acfd18a4e', 'scsi-SQEMU_QEMU_HARDDISK_831c674b-a7a8-4a18-9cfe-2b7acfd18a4e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-05 01:00:32.394344 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c330a934--8550--546d--8551--a9ce4f4a4f0f-osd--block--c330a934--8550--546d--8551--a9ce4f4a4f0f', 'dm-uuid-LVM-M5GW0XsaZBYOdi3LjwKFnXxM7dHZGYisyuj76tYmxE1IOZqmeCabtxDaQl51AQiT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-05 01:00:32.394424 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-03-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-05 01:00:32.394438 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--824ea9fd--8e44--5b08--9075--8333765a455e-osd--block--824ea9fd--8e44--5b08--9075--8333765a455e', 'dm-uuid-LVM-YQsQAY86Fx4ju4TNq2gKTp2qhyUkpD30NpWlR2Lj975POoLLl6xqkUcdSwvKaup1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-05 01:00:32.394449 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3bb92c70--c222--5380--a7bf--d21f250fcd2a-osd--block--3bb92c70--c222--5380--a7bf--d21f250fcd2a', 'dm-uuid-LVM-Iwi0qyKjiGmMF5ursl1dLgDY0DpsldIbWEqgh6AVunI3t2Bgz9ffIVamVaOiYcdC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-05 01:00:32.394460 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--377d1900--3c05--5c55--820b--3d4ba19b512c-osd--block--377d1900--3c05--5c55--820b--3d4ba19b512c', 'dm-uuid-LVM-KOpPIgP3YZPgrR5U1Alrp0YgUL65ze1aGCE4YLXLcRuVkn0cprnjm94w3OsBdDWy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-05 01:00:32.394482 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-05 01:00:32.394493 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-05 01:00:32.394503 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-05 01:00:32.394514 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-05 01:00:32.394591 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-05 01:00:32.394614 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-05 01:00:32.394632 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-05 01:00:32.394650 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-05 01:00:32.394668 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-05 01:00:32.394700 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-05 01:00:32.394734 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-05 01:00:32.394754 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-05 01:00:32.394891 |
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d10d19df-84d5-4f9c-9dff-ab89b235cba9', 'scsi-SQEMU_QEMU_HARDDISK_d10d19df-84d5-4f9c-9dff-ab89b235cba9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d10d19df-84d5-4f9c-9dff-ab89b235cba9-part1', 'scsi-SQEMU_QEMU_HARDDISK_d10d19df-84d5-4f9c-9dff-ab89b235cba9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d10d19df-84d5-4f9c-9dff-ab89b235cba9-part14', 'scsi-SQEMU_QEMU_HARDDISK_d10d19df-84d5-4f9c-9dff-ab89b235cba9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d10d19df-84d5-4f9c-9dff-ab89b235cba9-part15', 'scsi-SQEMU_QEMU_HARDDISK_d10d19df-84d5-4f9c-9dff-ab89b235cba9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d10d19df-84d5-4f9c-9dff-ab89b235cba9-part16', 'scsi-SQEMU_QEMU_HARDDISK_d10d19df-84d5-4f9c-9dff-ab89b235cba9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 
167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:32.394917 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.394935 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:32.394958 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--3bb92c70--c222--5380--a7bf--d21f250fcd2a-osd--block--3bb92c70--c222--5380--a7bf--d21f250fcd2a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-m5POgL-rOBp-YXYX-f3KV-nJ3H-4ca2-4TuzW5', 'scsi-0QEMU_QEMU_HARDDISK_a543ca24-8ce5-4d4d-a7ab-f0db2d7f7bb2', 'scsi-SQEMU_QEMU_HARDDISK_a543ca24-8ce5-4d4d-a7ab-f0db2d7f7bb2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:32.395016 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--377d1900--3c05--5c55--820b--3d4ba19b512c-osd--block--377d1900--3c05--5c55--820b--3d4ba19b512c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1de6Ye-L2s7-EBhG-a0LS-PRvj-HatI-TsRBgx', 'scsi-0QEMU_QEMU_HARDDISK_e02e3eed-6f8b-4cff-9a7e-0f14751ef6ba', 'scsi-SQEMU_QEMU_HARDDISK_e02e3eed-6f8b-4cff-9a7e-0f14751ef6ba'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:32.395036 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_160e21cb-7f36-4211-96c7-9609d25dd0e2', 'scsi-SQEMU_QEMU_HARDDISK_160e21cb-7f36-4211-96c7-9609d25dd0e2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:32.395158 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:32.395174 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:32.395188 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-03-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:32.395206 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:32.395243 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3', 'scsi-SQEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3-part1', 'scsi-SQEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3-part14', 'scsi-SQEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3-part15', 'scsi-SQEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3-part16', 'scsi-SQEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:32.395359 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c330a934--8550--546d--8551--a9ce4f4a4f0f-osd--block--c330a934--8550--546d--8551--a9ce4f4a4f0f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-IvGrtq-91Hy-Ua6w-dSHl-JVgq-dNiF-ZDVSPO', 'scsi-0QEMU_QEMU_HARDDISK_dde5ff38-a1e5-4746-bab1-211109e78654', 'scsi-SQEMU_QEMU_HARDDISK_dde5ff38-a1e5-4746-bab1-211109e78654'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:32.395385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:32.395402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:32.395421 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--824ea9fd--8e44--5b08--9075--8333765a455e-osd--block--824ea9fd--8e44--5b08--9075--8333765a455e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3VLkW2-HYKO-b9sH-FgVc-eGYL-BmyQ-VG6oGC', 'scsi-0QEMU_QEMU_HARDDISK_4c017526-66b5-4804-9f5d-05d3d9a7b1e0', 'scsi-SQEMU_QEMU_HARDDISK_4c017526-66b5-4804-9f5d-05d3d9a7b1e0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:32.395459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:32.395476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:32.395493 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26a11086-b273-42dd-aa8f-9644b133a637', 'scsi-SQEMU_QEMU_HARDDISK_26a11086-b273-42dd-aa8f-9644b133a637'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:32.395510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:32.395805 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-03-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:32.395844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-04-05 01:00:32.395862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:32.395878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:32.395922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b4825ea0-ddc9-4dcd-98b7-2aee45b23bac', 'scsi-SQEMU_QEMU_HARDDISK_b4825ea0-ddc9-4dcd-98b7-2aee45b23bac'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b4825ea0-ddc9-4dcd-98b7-2aee45b23bac-part1', 'scsi-SQEMU_QEMU_HARDDISK_b4825ea0-ddc9-4dcd-98b7-2aee45b23bac-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b4825ea0-ddc9-4dcd-98b7-2aee45b23bac-part14', 'scsi-SQEMU_QEMU_HARDDISK_b4825ea0-ddc9-4dcd-98b7-2aee45b23bac-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b4825ea0-ddc9-4dcd-98b7-2aee45b23bac-part15', 'scsi-SQEMU_QEMU_HARDDISK_b4825ea0-ddc9-4dcd-98b7-2aee45b23bac-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b4825ea0-ddc9-4dcd-98b7-2aee45b23bac-part16', 'scsi-SQEMU_QEMU_HARDDISK_b4825ea0-ddc9-4dcd-98b7-2aee45b23bac-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:32.396081 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-03-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:32.396100 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.396112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:32.396123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:32.396143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-04-05 01:00:32.396153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:32.396169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:32.396180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:32.396190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:32.396200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:32.396282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_77f7e0f2-85c8-48ef-ab3c-0b23e9070d00', 'scsi-SQEMU_QEMU_HARDDISK_77f7e0f2-85c8-48ef-ab3c-0b23e9070d00'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_77f7e0f2-85c8-48ef-ab3c-0b23e9070d00-part1', 'scsi-SQEMU_QEMU_HARDDISK_77f7e0f2-85c8-48ef-ab3c-0b23e9070d00-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_77f7e0f2-85c8-48ef-ab3c-0b23e9070d00-part14', 'scsi-SQEMU_QEMU_HARDDISK_77f7e0f2-85c8-48ef-ab3c-0b23e9070d00-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_77f7e0f2-85c8-48ef-ab3c-0b23e9070d00-part15', 'scsi-SQEMU_QEMU_HARDDISK_77f7e0f2-85c8-48ef-ab3c-0b23e9070d00-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_77f7e0f2-85c8-48ef-ab3c-0b23e9070d00-part16', 'scsi-SQEMU_QEMU_HARDDISK_77f7e0f2-85c8-48ef-ab3c-0b23e9070d00-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:32.396306 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.396317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-03-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:32.396328 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:32.396338 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:32.396353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:32.396364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': 
None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:32.396375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:32.396385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:32.396460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:32.396474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2026-04-05 01:00:32.396495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:32.396505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:32.396526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3ccf04e2-60ac-4e1d-9501-51c6c11a3555', 'scsi-SQEMU_QEMU_HARDDISK_3ccf04e2-60ac-4e1d-9501-51c6c11a3555'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3ccf04e2-60ac-4e1d-9501-51c6c11a3555-part1', 'scsi-SQEMU_QEMU_HARDDISK_3ccf04e2-60ac-4e1d-9501-51c6c11a3555-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3ccf04e2-60ac-4e1d-9501-51c6c11a3555-part14', 'scsi-SQEMU_QEMU_HARDDISK_3ccf04e2-60ac-4e1d-9501-51c6c11a3555-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3ccf04e2-60ac-4e1d-9501-51c6c11a3555-part15', 'scsi-SQEMU_QEMU_HARDDISK_3ccf04e2-60ac-4e1d-9501-51c6c11a3555-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3ccf04e2-60ac-4e1d-9501-51c6c11a3555-part16', 'scsi-SQEMU_QEMU_HARDDISK_3ccf04e2-60ac-4e1d-9501-51c6c11a3555-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:32.396603 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-03-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:32.396618 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:32.396627 | orchestrator | 2026-04-05 01:00:32.396638 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-05 01:00:32.396648 | orchestrator | Sunday 05 April 2026 00:49:30 +0000 (0:00:01.981) 0:00:43.393 ********** 2026-04-05 01:00:32.396667 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bd7e6aba--230a--5307--afd3--3b474950d4d0-osd--block--bd7e6aba--230a--5307--afd3--3b474950d4d0', 'dm-uuid-LVM-m1QlHxCsbxztU2FuOybrbqS7CBCT7wjEoNVQSmaG9N9pwN9NxAX2gf2DoZoQBSLW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:32.396680 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ffa9e237--b4c6--554d--9530--d8db42979c07-osd--block--ffa9e237--b4c6--554d--9530--d8db42979c07', 'dm-uuid-LVM-MPhbeREO53p8Jlrygb16JZjJdslDbKe9UFAHKtvpsM3Td0r3FZzHgndlgeccqD31'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:32.396691 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:32.396702 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:32.396712 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:32.396824 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:32.396862 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:32.396882 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c330a934--8550--546d--8551--a9ce4f4a4f0f-osd--block--c330a934--8550--546d--8551--a9ce4f4a4f0f', 'dm-uuid-LVM-M5GW0XsaZBYOdi3LjwKFnXxM7dHZGYisyuj76tYmxE1IOZqmeCabtxDaQl51AQiT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:32.396906 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:32.396924 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--824ea9fd--8e44--5b08--9075--8333765a455e-osd--block--824ea9fd--8e44--5b08--9075--8333765a455e', 'dm-uuid-LVM-YQsQAY86Fx4ju4TNq2gKTp2qhyUkpD30NpWlR2Lj975POoLLl6xqkUcdSwvKaup1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 
'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:32.396940 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:32.397149 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:32.397195 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:32.397213 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:32.397229 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:32.397359 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5', 'scsi-SQEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5-part1', 'scsi-SQEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5-part14', 'scsi-SQEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5-part15', 'scsi-SQEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5-part16', 'scsi-SQEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-05 01:00:32.397403 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--bd7e6aba--230a--5307--afd3--3b474950d4d0-osd--block--bd7e6aba--230a--5307--afd3--3b474950d4d0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-g3afkn-oiK6-fbGy-ikDI-QGrc-Ke5t-Vng8th', 'scsi-0QEMU_QEMU_HARDDISK_caeb3c42-c4b8-40bd-8e18-9e72dc321772', 'scsi-SQEMU_QEMU_HARDDISK_caeb3c42-c4b8-40bd-8e18-9e72dc321772'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:32.397423 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:32.397438 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': 
['ceph--ffa9e237--b4c6--554d--9530--d8db42979c07-osd--block--ffa9e237--b4c6--554d--9530--d8db42979c07'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oEjU1j-7vo1-FBbZ-xQfu-XNze-tfrU-fzo2Hf', 'scsi-0QEMU_QEMU_HARDDISK_62ed18a5-03b2-4cb7-a868-d43e6cb85064', 'scsi-SQEMU_QEMU_HARDDISK_62ed18a5-03b2-4cb7-a868-d43e6cb85064'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:32.397449 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_831c674b-a7a8-4a18-9cfe-2b7acfd18a4e', 'scsi-SQEMU_QEMU_HARDDISK_831c674b-a7a8-4a18-9cfe-2b7acfd18a4e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:32.397537 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-03-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:32.397560 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:32.397570 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:32.397580 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:32.397590 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.397606 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:32.397684 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3', 'scsi-SQEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3-part1', 'scsi-SQEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3-part14', 'scsi-SQEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3-part15', 'scsi-SQEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3-part16', 'scsi-SQEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:32.397716 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c330a934--8550--546d--8551--a9ce4f4a4f0f-osd--block--c330a934--8550--546d--8551--a9ce4f4a4f0f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-IvGrtq-91Hy-Ua6w-dSHl-JVgq-dNiF-ZDVSPO', 'scsi-0QEMU_QEMU_HARDDISK_dde5ff38-a1e5-4746-bab1-211109e78654', 'scsi-SQEMU_QEMU_HARDDISK_dde5ff38-a1e5-4746-bab1-211109e78654'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:32.397736 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--824ea9fd--8e44--5b08--9075--8333765a455e-osd--block--824ea9fd--8e44--5b08--9075--8333765a455e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3VLkW2-HYKO-b9sH-FgVc-eGYL-BmyQ-VG6oGC', 'scsi-0QEMU_QEMU_HARDDISK_4c017526-66b5-4804-9f5d-05d3d9a7b1e0', 'scsi-SQEMU_QEMU_HARDDISK_4c017526-66b5-4804-9f5d-05d3d9a7b1e0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:32.397776 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26a11086-b273-42dd-aa8f-9644b133a637', 'scsi-SQEMU_QEMU_HARDDISK_26a11086-b273-42dd-aa8f-9644b133a637'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:32.397870 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-03-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:32.397884 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:32.397894 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:32.397903 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
2026-04-05 01:00:32.397917 | orchestrator | skipping: [testbed-node-0] => items loop3, loop4, loop5, loop6, loop7, sda, sr0 (false_condition: inventory_hostname in groups.get(osd_group_name, []))
2026-04-05 01:00:32.398113 | orchestrator | skipping: [testbed-node-5] => items dm-0, dm-1, loop0, loop1, loop2, loop3, loop4, loop5, loop6, loop7, sda, sdb, sdc, sdd, sr0 (false_condition: osd_auto_discovery | default(False) | bool)
2026-04-05 01:00:32.398205 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.398620 | orchestrator | (testbed-node-5 device facts: sda 80.00 GB QEMU HARDDISK root disk; sdb/sdc 20.00 GB LVM PVs backing the ceph osd-block LVs dm-0/dm-1; sdd 20.00 GB unused QEMU HARDDISK; sr0 config-2 QEMU DVD-ROM; loop0-loop7 0-byte loop devices)
2026-04-05 01:00:32.398795 | orchestrator | skipping: [testbed-node-1] => items loop0, loop1, loop2, loop3, loop4, loop5, loop6, loop7, sda, sr0 (false_condition: inventory_hostname in groups.get(osd_group_name, []))
2026-04-05 01:00:32.399189 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.399209 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.399222 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.399237 | orchestrator | skipping: [testbed-node-2] => items loop0, loop1, loop2, loop3, loop4, loop5, loop6, loop7, sda, sr0 (false_condition: inventory_hostname in groups.get(osd_group_name, []))
2026-04-05 01:00:32.399525 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.399540 | orchestrator |
2026-04-05 01:00:32.399641 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-05 01:00:32.399663 | orchestrator | Sunday 05 April 2026 00:49:32 +0000 (0:00:02.038) 0:00:45.431 **********
2026-04-05 01:00:32.399676 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.399689 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.399703 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.399716 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:32.399729 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:32.399743 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:32.399756 | orchestrator |
2026-04-05 01:00:32.399769 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-05 01:00:32.399782 | orchestrator | Sunday 05 April 2026 00:49:35 +0000 (0:00:02.386) 0:00:47.818 **********
2026-04-05 01:00:32.399795 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.399808 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.399818 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.399826 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:32.399834 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:32.399842 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:32.399850 | orchestrator |
2026-04-05 01:00:32.399858 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-05 01:00:32.399866 | orchestrator | Sunday 05 April 2026 00:49:36 +0000 (0:00:02.225) 0:00:49.445 **********
2026-04-05 01:00:32.399874 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.399882 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.399890 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.399898 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.399906 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.399913 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.399921 | orchestrator |
2026-04-05 01:00:32.399929 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-05 01:00:32.399937 | orchestrator | Sunday 05 April 2026 00:49:38 +0000 (0:00:02.225) 0:00:51.671 **********
2026-04-05 01:00:32.399945 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.399963 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.400039 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.400049 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:32.400056 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:32.400064 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:32.400072 | orchestrator | 2026-04-05 01:00:32.400080 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-05 01:00:32.400089 | orchestrator | Sunday 05 April 2026 00:49:40 +0000 (0:00:01.151) 0:00:52.822 ********** 2026-04-05 01:00:32.400097 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.400105 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.400112 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.400120 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:32.400128 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:32.400136 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:32.400144 | orchestrator | 2026-04-05 01:00:32.400152 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-05 01:00:32.400164 | orchestrator | Sunday 05 April 2026 00:49:42 +0000 (0:00:02.533) 0:00:55.356 ********** 2026-04-05 01:00:32.400177 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.400191 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.400204 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.400216 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:32.400229 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:32.400242 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:32.400255 | orchestrator | 2026-04-05 01:00:32.400267 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-05 01:00:32.400291 | orchestrator | Sunday 05 April 2026 00:49:44 +0000 (0:00:01.600) 0:00:56.957 ********** 
2026-04-05 01:00:32.400305 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-04-05 01:00:32.400321 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-04-05 01:00:32.400335 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-04-05 01:00:32.400350 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-04-05 01:00:32.400363 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-04-05 01:00:32.400378 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-05 01:00:32.400388 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-05 01:00:32.400397 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-05 01:00:32.400407 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-05 01:00:32.400416 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-04-05 01:00:32.400425 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-05 01:00:32.400435 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-05 01:00:32.400444 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-04-05 01:00:32.400453 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-04-05 01:00:32.400462 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-05 01:00:32.400471 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-05 01:00:32.400480 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-05 01:00:32.400489 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-04-05 01:00:32.400498 | orchestrator | 2026-04-05 01:00:32.400507 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-05 01:00:32.400516 | orchestrator | Sunday 05 April 2026 00:49:51 +0000 (0:00:06.852) 0:01:03.810 ********** 2026-04-05 01:00:32.400525 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-0)  2026-04-05 01:00:32.400536 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-05 01:00:32.400545 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-05 01:00:32.400554 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.400562 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-05 01:00:32.400578 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-05 01:00:32.400586 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-05 01:00:32.400594 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-05 01:00:32.400645 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-05 01:00:32.400653 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-05 01:00:32.400660 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.400667 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-05 01:00:32.400674 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-05 01:00:32.400681 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-05 01:00:32.400688 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.400694 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:32.400701 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-05 01:00:32.400708 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-05 01:00:32.400715 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-05 01:00:32.400722 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:32.400728 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-05 01:00:32.400735 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-05 01:00:32.400742 | orchestrator | skipping: 
[testbed-node-2] => (item=testbed-node-2)  2026-04-05 01:00:32.400749 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:32.400756 | orchestrator | 2026-04-05 01:00:32.400763 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-05 01:00:32.400769 | orchestrator | Sunday 05 April 2026 00:49:52 +0000 (0:00:01.700) 0:01:05.510 ********** 2026-04-05 01:00:32.400776 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:32.400783 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:32.400790 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:32.400797 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:00:32.400804 | orchestrator | 2026-04-05 01:00:32.400811 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-05 01:00:32.400819 | orchestrator | Sunday 05 April 2026 00:49:54 +0000 (0:00:02.024) 0:01:07.535 ********** 2026-04-05 01:00:32.400826 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.400833 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.400839 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.400846 | orchestrator | 2026-04-05 01:00:32.400853 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-05 01:00:32.400860 | orchestrator | Sunday 05 April 2026 00:49:55 +0000 (0:00:00.429) 0:01:07.964 ********** 2026-04-05 01:00:32.400866 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.400873 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.400880 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.400886 | orchestrator | 2026-04-05 01:00:32.400893 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 
2026-04-05 01:00:32.400900 | orchestrator | Sunday 05 April 2026 00:49:55 +0000 (0:00:00.496) 0:01:08.461 ********** 2026-04-05 01:00:32.400907 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.400913 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.400920 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.400927 | orchestrator | 2026-04-05 01:00:32.400934 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-05 01:00:32.400940 | orchestrator | Sunday 05 April 2026 00:49:56 +0000 (0:00:00.701) 0:01:09.162 ********** 2026-04-05 01:00:32.400948 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:32.400960 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.400990 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:32.400998 | orchestrator | 2026-04-05 01:00:32.401005 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-05 01:00:32.401012 | orchestrator | Sunday 05 April 2026 00:49:57 +0000 (0:00:00.615) 0:01:09.778 ********** 2026-04-05 01:00:32.401019 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 01:00:32.401025 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 01:00:32.401032 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 01:00:32.401039 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.401045 | orchestrator | 2026-04-05 01:00:32.401052 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-05 01:00:32.401059 | orchestrator | Sunday 05 April 2026 00:49:57 +0000 (0:00:00.489) 0:01:10.268 ********** 2026-04-05 01:00:32.401065 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 01:00:32.401072 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 01:00:32.401079 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-5)  2026-04-05 01:00:32.401086 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.401092 | orchestrator | 2026-04-05 01:00:32.401099 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-05 01:00:32.401106 | orchestrator | Sunday 05 April 2026 00:49:57 +0000 (0:00:00.417) 0:01:10.685 ********** 2026-04-05 01:00:32.401112 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 01:00:32.401119 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 01:00:32.401126 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 01:00:32.401133 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.401139 | orchestrator | 2026-04-05 01:00:32.401146 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-05 01:00:32.401152 | orchestrator | Sunday 05 April 2026 00:49:58 +0000 (0:00:00.570) 0:01:11.256 ********** 2026-04-05 01:00:32.401159 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:32.401165 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:32.401175 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.401186 | orchestrator | 2026-04-05 01:00:32.401197 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-05 01:00:32.401208 | orchestrator | Sunday 05 April 2026 00:49:58 +0000 (0:00:00.462) 0:01:11.719 ********** 2026-04-05 01:00:32.401219 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-05 01:00:32.401230 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-05 01:00:32.401274 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-05 01:00:32.401287 | orchestrator | 2026-04-05 01:00:32.401296 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-05 01:00:32.401307 | orchestrator | Sunday 05 April 2026 
00:50:00 +0000 (0:00:01.253) 0:01:12.972 ********** 2026-04-05 01:00:32.401316 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 01:00:32.401328 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 01:00:32.401339 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 01:00:32.401350 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-05 01:00:32.401362 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-05 01:00:32.401373 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-05 01:00:32.401385 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-05 01:00:32.401397 | orchestrator | 2026-04-05 01:00:32.401404 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-05 01:00:32.401410 | orchestrator | Sunday 05 April 2026 00:50:01 +0000 (0:00:00.964) 0:01:13.937 ********** 2026-04-05 01:00:32.401417 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 01:00:32.401431 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 01:00:32.401438 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 01:00:32.401444 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-05 01:00:32.401451 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-05 01:00:32.401458 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-05 01:00:32.401464 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 
2026-04-05 01:00:32.401471 | orchestrator | 2026-04-05 01:00:32.401477 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-05 01:00:32.401484 | orchestrator | Sunday 05 April 2026 00:50:03 +0000 (0:00:02.387) 0:01:16.324 ********** 2026-04-05 01:00:32.401492 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:00:32.401500 | orchestrator | 2026-04-05 01:00:32.401507 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-05 01:00:32.401514 | orchestrator | Sunday 05 April 2026 00:50:05 +0000 (0:00:01.645) 0:01:17.969 ********** 2026-04-05 01:00:32.401520 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:00:32.401527 | orchestrator | 2026-04-05 01:00:32.401534 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-05 01:00:32.401546 | orchestrator | Sunday 05 April 2026 00:50:07 +0000 (0:00:01.867) 0:01:19.837 ********** 2026-04-05 01:00:32.401553 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.401560 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.401566 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.401573 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:32.401579 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:32.401586 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:32.401593 | orchestrator | 2026-04-05 01:00:32.401599 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-05 01:00:32.401606 | orchestrator | Sunday 05 April 2026 00:50:08 +0000 (0:00:01.692) 0:01:21.530 ********** 2026-04-05 
01:00:32.401613 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:32.401619 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:32.401626 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:32.401633 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:32.401639 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.401646 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:32.401652 | orchestrator | 2026-04-05 01:00:32.401659 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-05 01:00:32.401666 | orchestrator | Sunday 05 April 2026 00:50:09 +0000 (0:00:00.772) 0:01:22.302 ********** 2026-04-05 01:00:32.401673 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:32.401679 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:32.401686 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:32.401693 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:32.401699 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.401706 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:32.401713 | orchestrator | 2026-04-05 01:00:32.401719 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-05 01:00:32.401726 | orchestrator | Sunday 05 April 2026 00:50:10 +0000 (0:00:00.853) 0:01:23.155 ********** 2026-04-05 01:00:32.401733 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:32.401740 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:32.401746 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:32.401753 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:32.401765 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.401771 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:32.401778 | orchestrator | 2026-04-05 01:00:32.401784 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-05 01:00:32.401791 | orchestrator | 
Sunday 05 April 2026 00:50:11 +0000 (0:00:00.740) 0:01:23.896 ********** 2026-04-05 01:00:32.401798 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.401805 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.401811 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.401818 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:32.401825 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:32.401857 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:32.401865 | orchestrator | 2026-04-05 01:00:32.401872 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-05 01:00:32.401878 | orchestrator | Sunday 05 April 2026 00:50:12 +0000 (0:00:01.227) 0:01:25.123 ********** 2026-04-05 01:00:32.401885 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.401891 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.401898 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.401905 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:32.401911 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:32.401918 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:32.401924 | orchestrator | 2026-04-05 01:00:32.401931 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-05 01:00:32.401938 | orchestrator | Sunday 05 April 2026 00:50:13 +0000 (0:00:00.649) 0:01:25.772 ********** 2026-04-05 01:00:32.401944 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.401951 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.401957 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.401964 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:32.401996 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:32.402008 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:32.402096 | orchestrator | 2026-04-05 01:00:32.402108 | orchestrator | TASK 
[ceph-handler : Check for a ceph-crash container] ************************* 2026-04-05 01:00:32.402120 | orchestrator | Sunday 05 April 2026 00:50:13 +0000 (0:00:00.787) 0:01:26.559 ********** 2026-04-05 01:00:32.402132 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:32.402143 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.402155 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:32.402162 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:32.402169 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:32.402175 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:32.402182 | orchestrator | 2026-04-05 01:00:32.402188 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-05 01:00:32.402195 | orchestrator | Sunday 05 April 2026 00:50:16 +0000 (0:00:02.324) 0:01:28.883 ********** 2026-04-05 01:00:32.402202 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:32.402208 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.402215 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:32.402222 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:32.402233 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:32.402243 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:32.402253 | orchestrator | 2026-04-05 01:00:32.402263 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-05 01:00:32.402275 | orchestrator | Sunday 05 April 2026 00:50:17 +0000 (0:00:01.607) 0:01:30.491 ********** 2026-04-05 01:00:32.402282 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.402289 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.402296 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.402302 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:32.402309 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:32.402315 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:32.402322 | 
orchestrator | 2026-04-05 01:00:32.402329 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-05 01:00:32.402344 | orchestrator | Sunday 05 April 2026 00:50:19 +0000 (0:00:01.906) 0:01:32.398 ********** 2026-04-05 01:00:32.402351 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.402358 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.402364 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.402371 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:32.402379 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:32.402390 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:32.402401 | orchestrator | 2026-04-05 01:00:32.402419 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-05 01:00:32.402430 | orchestrator | Sunday 05 April 2026 00:50:21 +0000 (0:00:01.607) 0:01:34.005 ********** 2026-04-05 01:00:32.402439 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:32.402450 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.402460 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:32.402469 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:32.402479 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:32.402488 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:32.402497 | orchestrator | 2026-04-05 01:00:32.402508 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-05 01:00:32.402520 | orchestrator | Sunday 05 April 2026 00:50:22 +0000 (0:00:01.616) 0:01:35.621 ********** 2026-04-05 01:00:32.402531 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:32.402542 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:32.402554 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.402565 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:32.402576 | orchestrator | skipping: [testbed-node-1] 2026-04-05 
01:00:32.402588 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:32.402595 | orchestrator | 2026-04-05 01:00:32.402601 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-05 01:00:32.402608 | orchestrator | Sunday 05 April 2026 00:50:23 +0000 (0:00:01.023) 0:01:36.645 ********** 2026-04-05 01:00:32.402615 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:32.402622 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.402628 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:32.402636 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:32.402648 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:32.402659 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:32.402671 | orchestrator | 2026-04-05 01:00:32.402682 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-05 01:00:32.402692 | orchestrator | Sunday 05 April 2026 00:50:26 +0000 (0:00:02.199) 0:01:38.844 ********** 2026-04-05 01:00:32.402703 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.402714 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.402726 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.402737 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:32.402748 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:32.402759 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:32.402771 | orchestrator | 2026-04-05 01:00:32.402778 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-05 01:00:32.402784 | orchestrator | Sunday 05 April 2026 00:50:27 +0000 (0:00:01.136) 0:01:39.980 ********** 2026-04-05 01:00:32.402791 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.402797 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.402804 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.402811 | 
orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:32.402891 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:32.402902 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:32.402909 | orchestrator | 2026-04-05 01:00:32.402915 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-05 01:00:32.402922 | orchestrator | Sunday 05 April 2026 00:50:28 +0000 (0:00:01.469) 0:01:41.450 ********** 2026-04-05 01:00:32.402929 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.402935 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.402942 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.402957 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:32.402964 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:32.403001 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:32.403008 | orchestrator | 2026-04-05 01:00:32.403015 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-05 01:00:32.403022 | orchestrator | Sunday 05 April 2026 00:50:29 +0000 (0:00:01.096) 0:01:42.547 ********** 2026-04-05 01:00:32.403029 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:32.403036 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.403042 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:32.403049 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:32.403056 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:32.403062 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:32.403070 | orchestrator | 2026-04-05 01:00:32.403077 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-05 01:00:32.403083 | orchestrator | Sunday 05 April 2026 00:50:31 +0000 (0:00:01.627) 0:01:44.174 ********** 2026-04-05 01:00:32.403090 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:32.403097 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.403103 | 
orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.403110 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:32.403117 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:32.403124 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:32.403130 | orchestrator |
2026-04-05 01:00:32.403137 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-05 01:00:32.403149 | orchestrator | Sunday 05 April 2026 00:50:33 +0000 (0:00:02.200) 0:01:46.374 **********
2026-04-05 01:00:32.403160 | orchestrator | changed: [testbed-node-3]
2026-04-05 01:00:32.403170 | orchestrator | changed: [testbed-node-4]
2026-04-05 01:00:32.403180 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:00:32.403190 | orchestrator | changed: [testbed-node-5]
2026-04-05 01:00:32.403200 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:00:32.403211 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:00:32.403222 | orchestrator |
2026-04-05 01:00:32.403232 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-05 01:00:32.403242 | orchestrator | Sunday 05 April 2026 00:50:35 +0000 (0:00:02.031) 0:01:48.406 **********
2026-04-05 01:00:32.403253 | orchestrator | changed: [testbed-node-3]
2026-04-05 01:00:32.403263 | orchestrator | changed: [testbed-node-5]
2026-04-05 01:00:32.403273 | orchestrator | changed: [testbed-node-4]
2026-04-05 01:00:32.403284 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:00:32.403295 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:00:32.403305 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:00:32.403316 | orchestrator |
2026-04-05 01:00:32.403328 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-05 01:00:32.403339 | orchestrator | Sunday 05 April 2026 00:50:38 +0000 (0:00:03.221) 0:01:51.627 **********
2026-04-05 01:00:32.403351 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:00:32.403361 | orchestrator |
2026-04-05 01:00:32.403375 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-05 01:00:32.403383 | orchestrator | Sunday 05 April 2026 00:50:40 +0000 (0:00:01.332) 0:01:52.960 **********
2026-04-05 01:00:32.403389 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.403396 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.403403 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.403409 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.403416 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.403423 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.403429 | orchestrator |
2026-04-05 01:00:32.403436 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-05 01:00:32.403442 | orchestrator | Sunday 05 April 2026 00:50:40 +0000 (0:00:00.681) 0:01:53.641 **********
2026-04-05 01:00:32.403457 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.403463 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.403470 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.403477 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.403484 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.403490 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.403497 | orchestrator |
2026-04-05 01:00:32.403503 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-05 01:00:32.403510 | orchestrator | Sunday 05 April 2026 00:50:41 +0000 (0:00:00.938) 0:01:54.579 **********
2026-04-05 01:00:32.403517 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-05 01:00:32.403524 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-05 01:00:32.403530 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-05 01:00:32.403537 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-05 01:00:32.403544 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-05 01:00:32.403550 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-05 01:00:32.403557 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-05 01:00:32.403564 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-05 01:00:32.403571 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-05 01:00:32.403577 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-05 01:00:32.403632 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-05 01:00:32.403645 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-05 01:00:32.403656 | orchestrator |
2026-04-05 01:00:32.403667 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-05 01:00:32.403678 | orchestrator | Sunday 05 April 2026 00:50:43 +0000 (0:00:01.448) 0:01:56.027 **********
2026-04-05 01:00:32.403689 | orchestrator | changed: [testbed-node-3]
2026-04-05 01:00:32.403700 | orchestrator | changed: [testbed-node-4]
2026-04-05 01:00:32.403712 | orchestrator | changed: [testbed-node-5]
2026-04-05 01:00:32.403723 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:00:32.403733 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:00:32.403746 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:00:32.403753 | orchestrator |
2026-04-05 01:00:32.403760 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-05 01:00:32.403767 | orchestrator | Sunday 05 April 2026 00:50:44 +0000 (0:00:01.592) 0:01:57.620 **********
2026-04-05 01:00:32.403773 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.403784 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.403794 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.403805 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.403815 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.403825 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.403835 | orchestrator |
2026-04-05 01:00:32.403845 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-05 01:00:32.403856 | orchestrator | Sunday 05 April 2026 00:50:45 +0000 (0:00:00.758) 0:01:58.378 **********
2026-04-05 01:00:32.403868 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.403878 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.403890 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.403901 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.403913 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.403924 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.403935 | orchestrator |
2026-04-05 01:00:32.403947 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-05 01:00:32.403995 | orchestrator | Sunday 05 April 2026 00:50:46 +0000 (0:00:00.775) 0:01:59.153 **********
2026-04-05 01:00:32.404008 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.404019 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.404031 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.404042 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.404053 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.404064 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.404076 | orchestrator |
2026-04-05 01:00:32.404087 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-05 01:00:32.404098 | orchestrator | Sunday 05 April 2026 00:50:47 +0000 (0:00:00.589) 0:01:59.743 **********
2026-04-05 01:00:32.404111 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:00:32.404122 | orchestrator |
2026-04-05 01:00:32.404133 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-05 01:00:32.404143 | orchestrator | Sunday 05 April 2026 00:50:48 +0000 (0:00:01.303) 0:02:01.047 **********
2026-04-05 01:00:32.404150 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.404157 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.404163 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:32.404177 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:32.404184 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.404191 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:32.404197 | orchestrator |
2026-04-05 01:00:32.404205 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-05 01:00:32.404212 | orchestrator | Sunday 05 April 2026 00:51:56 +0000 (0:01:08.285) 0:03:09.332 **********
2026-04-05 01:00:32.404219 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-05 01:00:32.404226 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-05 01:00:32.404233 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-05 01:00:32.404239 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.404246 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-05 01:00:32.404253 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-05 01:00:32.404260 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-05 01:00:32.404266 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-05 01:00:32.404273 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-05 01:00:32.404280 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-05 01:00:32.404286 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.404293 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-05 01:00:32.404300 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-05 01:00:32.404306 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-05 01:00:32.404313 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.404320 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.404326 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-05 01:00:32.404333 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-05 01:00:32.404339 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-05 01:00:32.404346 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.404393 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-05 01:00:32.404401 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-05 01:00:32.404419 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-05 01:00:32.404431 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.404438 | orchestrator |
2026-04-05 01:00:32.404445 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-05 01:00:32.404452 | orchestrator | Sunday 05 April 2026 00:51:57 +0000 (0:00:01.177) 0:03:10.510 **********
2026-04-05 01:00:32.404458 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.404465 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.404471 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.404478 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.404485 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.404491 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.404498 | orchestrator |
2026-04-05 01:00:32.404505 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-05 01:00:32.404513 | orchestrator | Sunday 05 April 2026 00:51:59 +0000 (0:00:01.363) 0:03:11.873 **********
2026-04-05 01:00:32.404525 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.404532 | orchestrator |
2026-04-05 01:00:32.404538 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-05 01:00:32.404545 | orchestrator | Sunday 05 April 2026 00:51:59 +0000 (0:00:00.173) 0:03:12.047 **********
2026-04-05 01:00:32.404552 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.404558 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.404565 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.404572 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.404578 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.404585 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.404591 | orchestrator |
2026-04-05 01:00:32.404598 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-05 01:00:32.404605 | orchestrator | Sunday 05 April 2026 00:52:00 +0000 (0:00:00.927) 0:03:12.975 **********
2026-04-05 01:00:32.404611 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.404618 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.404624 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.404631 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.404638 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.404644 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.404651 | orchestrator |
2026-04-05 01:00:32.404658 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-05 01:00:32.404664 | orchestrator | Sunday 05 April 2026 00:52:01 +0000 (0:00:01.446) 0:03:14.421 **********
2026-04-05 01:00:32.404671 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.404678 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.404684 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.404691 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.404697 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.404704 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.404711 | orchestrator |
2026-04-05 01:00:32.404718 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-05 01:00:32.404727 | orchestrator | Sunday 05 April 2026 00:52:02 +0000 (0:00:00.995) 0:03:15.417 **********
2026-04-05 01:00:32.404738 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.404749 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.404760 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.404770 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:32.404781 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:32.404799 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:32.404811 | orchestrator |
2026-04-05 01:00:32.404823 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-05 01:00:32.404834 | orchestrator | Sunday 05 April 2026 00:52:05 +0000 (0:00:02.978) 0:03:18.395 **********
2026-04-05 01:00:32.404844 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.404862 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.404873 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.404885 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:32.404896 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:32.404908 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:32.404919 | orchestrator |
2026-04-05 01:00:32.404931 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-05 01:00:32.404943 | orchestrator | Sunday 05 April 2026 00:52:07 +0000 (0:00:01.387) 0:03:19.783 **********
2026-04-05 01:00:32.404952 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:00:32.404960 | orchestrator |
2026-04-05 01:00:32.405022 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-05 01:00:32.405032 | orchestrator | Sunday 05 April 2026 00:52:08 +0000 (0:00:01.592) 0:03:21.375 **********
2026-04-05 01:00:32.405039 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.405046 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.405053 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.405060 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.405066 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.405073 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.405079 | orchestrator |
2026-04-05 01:00:32.405086 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-05 01:00:32.405093 | orchestrator | Sunday 05 April 2026 00:52:09 +0000 (0:00:01.231) 0:03:22.606 **********
2026-04-05 01:00:32.405099 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.405106 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.405113 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.405119 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.405126 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.405132 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.405139 | orchestrator |
2026-04-05 01:00:32.405146 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-05 01:00:32.405152 | orchestrator | Sunday 05 April 2026 00:52:11 +0000 (0:00:01.201) 0:03:23.807 **********
2026-04-05 01:00:32.405159 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.405166 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.405209 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.405217 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.405225 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.405237 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.405248 | orchestrator |
2026-04-05 01:00:32.405259 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-05 01:00:32.405269 | orchestrator | Sunday 05 April 2026 00:52:12 +0000 (0:00:01.151) 0:03:24.959 **********
2026-04-05 01:00:32.405280 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.405290 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.405300 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.405311 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.405322 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.405332 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.405344 | orchestrator |
2026-04-05 01:00:32.405356 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-05 01:00:32.405367 | orchestrator | Sunday 05 April 2026 00:52:13 +0000 (0:00:01.329) 0:03:26.289 **********
2026-04-05 01:00:32.405376 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.405382 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.405388 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.405394 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.405400 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.405406 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.405412 | orchestrator |
2026-04-05 01:00:32.405419 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-05 01:00:32.405433 | orchestrator | Sunday 05 April 2026 00:52:14 +0000 (0:00:00.765) 0:03:27.055 **********
2026-04-05 01:00:32.405439 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.405445 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.405451 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.405458 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.405464 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.405470 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.405476 | orchestrator |
2026-04-05 01:00:32.405482 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-05 01:00:32.405488 | orchestrator | Sunday 05 April 2026 00:52:15 +0000 (0:00:01.248) 0:03:28.303 **********
2026-04-05 01:00:32.405494 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.405500 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.405506 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.405512 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.405519 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.405525 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.405531 | orchestrator |
2026-04-05 01:00:32.405537 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-05 01:00:32.405545 | orchestrator | Sunday 05 April 2026 00:52:16 +0000 (0:00:00.843) 0:03:29.146 **********
2026-04-05 01:00:32.405556 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.405563 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.405569 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.405575 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.405581 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.405587 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.405593 | orchestrator |
2026-04-05 01:00:32.405604 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-05 01:00:32.405612 | orchestrator | Sunday 05 April 2026 00:52:17 +0000 (0:00:01.275) 0:03:30.422 **********
2026-04-05 01:00:32.405618 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.405624 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.405631 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.405643 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:32.405649 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:32.405655 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:32.405661 | orchestrator |
2026-04-05 01:00:32.405668 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-05 01:00:32.405674 | orchestrator | Sunday 05 April 2026 00:52:19 +0000 (0:00:01.488) 0:03:31.910 **********
2026-04-05 01:00:32.405681 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:00:32.405688 | orchestrator |
2026-04-05 01:00:32.405694 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-05 01:00:32.405701 | orchestrator | Sunday 05 April 2026 00:52:20 +0000 (0:00:01.677) 0:03:33.587 **********
2026-04-05 01:00:32.405707 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-04-05 01:00:32.405713 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-04-05 01:00:32.405719 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-04-05 01:00:32.405726 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-04-05 01:00:32.405732 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-04-05 01:00:32.405738 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-04-05 01:00:32.405744 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-04-05 01:00:32.405752 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-04-05 01:00:32.405763 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-04-05 01:00:32.405773 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-04-05 01:00:32.405783 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-04-05 01:00:32.405799 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-04-05 01:00:32.405809 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-04-05 01:00:32.405819 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-04-05 01:00:32.405830 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-04-05 01:00:32.405841 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-04-05 01:00:32.405852 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-04-05 01:00:32.405862 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-04-05 01:00:32.405903 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-04-05 01:00:32.405911 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-04-05 01:00:32.405917 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-04-05 01:00:32.405923 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-04-05 01:00:32.405929 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-04-05 01:00:32.405936 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-04-05 01:00:32.405942 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-04-05 01:00:32.405948 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-04-05 01:00:32.405954 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-04-05 01:00:32.405960 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-04-05 01:00:32.405966 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-04-05 01:00:32.405992 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-04-05 01:00:32.405999 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-04-05 01:00:32.406005 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-04-05 01:00:32.406012 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-04-05 01:00:32.406047 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-04-05 01:00:32.406053 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-04-05 01:00:32.406060 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-04-05 01:00:32.406066 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-04-05 01:00:32.406072 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-04-05 01:00:32.406079 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-04-05 01:00:32.406085 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-04-05 01:00:32.406092 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-04-05 01:00:32.406098 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-04-05 01:00:32.406104 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-05 01:00:32.406111 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-04-05 01:00:32.406117 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-04-05 01:00:32.406123 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-04-05 01:00:32.406129 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-04-05 01:00:32.406135 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-05 01:00:32.406142 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-05 01:00:32.406148 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-05 01:00:32.406154 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-05 01:00:32.406160 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-04-05 01:00:32.406166 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-04-05 01:00:32.406184 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-05 01:00:32.406190 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-05 01:00:32.406196 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-05 01:00:32.406202 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-05 01:00:32.406209 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-05 01:00:32.406215 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-05 01:00:32.406221 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-05 01:00:32.406227 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-05 01:00:32.406233 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-05 01:00:32.406239 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-05 01:00:32.406245 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-05 01:00:32.406252 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-05 01:00:32.406258 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-05 01:00:32.406264 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-05 01:00:32.406270 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-05 01:00:32.406276 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-05 01:00:32.406282 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-05 01:00:32.406289 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-05 01:00:32.406295 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-05 01:00:32.406301 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-05 01:00:32.406307 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-05 01:00:32.406313 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-05 01:00:32.406319 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-05 01:00:32.406348 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-05 01:00:32.406355 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-04-05 01:00:32.406362 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-05 01:00:32.406368 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-05 01:00:32.406374 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-05 01:00:32.406380 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-05 01:00:32.406387 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-04-05 01:00:32.406393 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-04-05 01:00:32.406400 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-04-05 01:00:32.406406 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-05 01:00:32.406412 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-04-05 01:00:32.406419 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-05 01:00:32.406425 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-04-05 01:00:32.406431 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-04-05 01:00:32.406437 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-04-05 01:00:32.406444 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-05 01:00:32.406450 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-04-05 01:00:32.406456 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-04-05 01:00:32.406469 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-04-05 01:00:32.406475 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-04-05 01:00:32.406482 | orchestrator |
2026-04-05 01:00:32.406488 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-05 01:00:32.406494 | orchestrator | Sunday 05 April 2026 00:52:28 +0000 (0:00:07.560) 0:03:41.148 **********
2026-04-05 01:00:32.406501 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.406507 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.406513 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.406520 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 01:00:32.406526 | orchestrator |
2026-04-05 01:00:32.406532 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-04-05 01:00:32.406539 | orchestrator | Sunday 05 April 2026 00:52:29 +0000 (0:00:01.565) 0:03:42.714 **********
2026-04-05 01:00:32.406545 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-05 01:00:32.406552 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-05 01:00:32.406558 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-05 01:00:32.406565 | orchestrator |
2026-04-05 01:00:32.406575 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-04-05 01:00:32.406581 | orchestrator | Sunday 05 April 2026 00:52:31 +0000 (0:00:01.061) 0:03:43.775 **********
2026-04-05 01:00:32.406588 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-05 01:00:32.406594 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-05 01:00:32.406600 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-05 01:00:32.406606 | orchestrator |
2026-04-05 01:00:32.406613 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-05 01:00:32.406619 | orchestrator | Sunday 05 April 2026 00:52:33 +0000 (0:00:02.000) 0:03:45.775 **********
2026-04-05 01:00:32.406625 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.406646 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.406653 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.406659 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.406665 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.406671 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.406678 | orchestrator |
2026-04-05 01:00:32.406684 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-05 01:00:32.406690 | orchestrator | Sunday 05 April 2026 00:52:33 +0000 (0:00:00.941) 0:03:46.717 **********
2026-04-05 01:00:32.406697 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.406703 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.406709 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.406715 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.406722 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.406728 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.406734 | orchestrator |
2026-04-05 01:00:32.406741 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-05 01:00:32.406747 | orchestrator | Sunday 05 April 2026 00:52:35 +0000 (0:00:01.315) 0:03:48.033 **********
2026-04-05 01:00:32.406753 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.406760 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.406766 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.406777 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.406783 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.406790 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.406796 | orchestrator |
2026-04-05 01:00:32.406823 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-05 01:00:32.406831 | orchestrator | Sunday 05 April 2026 00:52:36 +0000 (0:00:00.823) 0:03:48.856 **********
2026-04-05 01:00:32.406837 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.406843 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.406849 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.406855 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.406864 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.406874 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.406886 | orchestrator |
2026-04-05 01:00:32.406896 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-05 01:00:32.406908 | orchestrator | Sunday 05 April 2026 00:52:36 +0000 (0:00:00.826) 0:03:49.683 **********
2026-04-05 01:00:32.406919 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.406930 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.406941 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.406952 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.406962 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.406988 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.406999 | orchestrator |
2026-04-05 01:00:32.407009 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-05 01:00:32.407020 | orchestrator | Sunday 05 April 2026 00:52:38 +0000 (0:00:01.184) 0:03:50.867 **********
2026-04-05 01:00:32.407031 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.407040 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.407050 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.407059 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.407070 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.407080 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.407091 | orchestrator |
2026-04-05 01:00:32.407102 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-05 01:00:32.407113 | orchestrator | Sunday 05 April 2026 00:52:39 +0000 (0:00:01.016) 0:03:51.883 **********
2026-04-05 01:00:32.407124 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.407134 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.407144 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.407155 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.407162 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.407168 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.407174 | orchestrator |
2026-04-05 01:00:32.407180 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-05 01:00:32.407187 | orchestrator | Sunday 05 April 2026 00:52:40 +0000 (0:00:01.631) 0:03:53.515 **********
2026-04-05 01:00:32.407193 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.407199 | orchestrator |
skipping: [testbed-node-4] 2026-04-05 01:00:32.407205 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.407211 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:32.407217 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:32.407223 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:32.407229 | orchestrator | 2026-04-05 01:00:32.407235 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-05 01:00:32.407241 | orchestrator | Sunday 05 April 2026 00:52:41 +0000 (0:00:00.848) 0:03:54.364 ********** 2026-04-05 01:00:32.407248 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:32.407254 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:32.407260 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:32.407266 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:32.407272 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.407292 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:32.407299 | orchestrator | 2026-04-05 01:00:32.407305 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-05 01:00:32.407311 | orchestrator | Sunday 05 April 2026 00:52:44 +0000 (0:00:02.607) 0:03:56.971 ********** 2026-04-05 01:00:32.407317 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:32.407323 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.407329 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:32.407335 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:32.407342 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:32.407348 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:32.407354 | orchestrator | 2026-04-05 01:00:32.407360 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-05 01:00:32.407366 | orchestrator | Sunday 05 April 2026 00:52:45 +0000 (0:00:00.853) 0:03:57.825 
********** 2026-04-05 01:00:32.407372 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:32.407378 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.407384 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:32.407390 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:32.407397 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:32.407402 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:32.407409 | orchestrator | 2026-04-05 01:00:32.407415 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-05 01:00:32.407421 | orchestrator | Sunday 05 April 2026 00:52:46 +0000 (0:00:01.159) 0:03:58.984 ********** 2026-04-05 01:00:32.407427 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.407433 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.407439 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.407446 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:32.407452 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:32.407458 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:32.407464 | orchestrator | 2026-04-05 01:00:32.407470 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-05 01:00:32.407476 | orchestrator | Sunday 05 April 2026 00:52:47 +0000 (0:00:01.205) 0:04:00.190 ********** 2026-04-05 01:00:32.407483 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-05 01:00:32.407490 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-05 01:00:32.407496 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-05 01:00:32.407502 | orchestrator | skipping: [testbed-node-0] 
2026-04-05 01:00:32.407538 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:32.407545 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:32.407551 | orchestrator | 2026-04-05 01:00:32.407557 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-05 01:00:32.407563 | orchestrator | Sunday 05 April 2026 00:52:48 +0000 (0:00:01.502) 0:04:01.693 ********** 2026-04-05 01:00:32.407572 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-04-05 01:00:32.407581 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-04-05 01:00:32.407588 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.407595 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-04-05 01:00:32.407610 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-04-05 01:00:32.407617 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.407623 | 
orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-04-05 01:00:32.407629 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2026-04-05 01:00:32.407635 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.407641 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:32.407647 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:32.407657 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:32.407664 | orchestrator | 2026-04-05 01:00:32.407670 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-05 01:00:32.407676 | orchestrator | Sunday 05 April 2026 00:52:49 +0000 (0:00:00.665) 0:04:02.358 ********** 2026-04-05 01:00:32.407682 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.407689 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.407695 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.407701 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:32.407707 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:32.407713 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:32.407719 | orchestrator | 2026-04-05 01:00:32.407725 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-05 01:00:32.407731 | orchestrator | Sunday 05 April 2026 00:52:50 +0000 (0:00:00.989) 0:04:03.347 ********** 2026-04-05 01:00:32.407737 | orchestrator | 
skipping: [testbed-node-3] 2026-04-05 01:00:32.407743 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.407749 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.407755 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:32.407761 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:32.407767 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:32.407774 | orchestrator | 2026-04-05 01:00:32.407780 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-05 01:00:32.407786 | orchestrator | Sunday 05 April 2026 00:52:51 +0000 (0:00:00.732) 0:04:04.079 ********** 2026-04-05 01:00:32.407792 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.407798 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.407804 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.407810 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:32.407816 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:32.407822 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:32.407828 | orchestrator | 2026-04-05 01:00:32.407834 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-05 01:00:32.407840 | orchestrator | Sunday 05 April 2026 00:52:52 +0000 (0:00:01.046) 0:04:05.126 ********** 2026-04-05 01:00:32.407847 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.407853 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.407859 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.407865 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:32.407875 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:32.407881 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:32.407888 | orchestrator | 2026-04-05 01:00:32.407894 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_address_block ipv6] **** 2026-04-05 01:00:32.407919 | orchestrator | Sunday 05 April 2026 00:52:53 +0000 (0:00:00.905) 0:04:06.031 ********** 2026-04-05 01:00:32.407926 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.407933 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.407939 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.407945 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:32.407951 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:32.407957 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:32.407963 | orchestrator | 2026-04-05 01:00:32.407991 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-05 01:00:32.407997 | orchestrator | Sunday 05 April 2026 00:52:54 +0000 (0:00:01.248) 0:04:07.280 ********** 2026-04-05 01:00:32.408004 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:32.408010 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.408016 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:32.408022 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:32.408028 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:32.408035 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:32.408041 | orchestrator | 2026-04-05 01:00:32.408047 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-05 01:00:32.408053 | orchestrator | Sunday 05 April 2026 00:52:55 +0000 (0:00:00.928) 0:04:08.209 ********** 2026-04-05 01:00:32.408059 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 01:00:32.408066 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 01:00:32.408072 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 01:00:32.408078 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.408084 | orchestrator | 2026-04-05 01:00:32.408090 | 
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-05 01:00:32.408096 | orchestrator | Sunday 05 April 2026 00:52:56 +0000 (0:00:00.831) 0:04:09.040 ********** 2026-04-05 01:00:32.408103 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 01:00:32.408109 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 01:00:32.408115 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 01:00:32.408121 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.408127 | orchestrator | 2026-04-05 01:00:32.408133 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-05 01:00:32.408139 | orchestrator | Sunday 05 April 2026 00:52:57 +0000 (0:00:00.794) 0:04:09.834 ********** 2026-04-05 01:00:32.408146 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 01:00:32.408152 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 01:00:32.408158 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 01:00:32.408164 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.408170 | orchestrator | 2026-04-05 01:00:32.408176 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-05 01:00:32.408183 | orchestrator | Sunday 05 April 2026 00:52:58 +0000 (0:00:01.102) 0:04:10.936 ********** 2026-04-05 01:00:32.408189 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:32.408195 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.408201 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:32.408207 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:32.408213 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:32.408219 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:32.408226 | orchestrator | 2026-04-05 01:00:32.408232 | 
orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-05 01:00:32.408243 | orchestrator | Sunday 05 April 2026 00:52:58 +0000 (0:00:00.780) 0:04:11.717 ********** 2026-04-05 01:00:32.408254 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-05 01:00:32.408261 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-05 01:00:32.408267 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-04-05 01:00:32.408273 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-05 01:00:32.408279 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:32.408285 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-04-05 01:00:32.408291 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:32.408297 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-04-05 01:00:32.408304 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:32.408310 | orchestrator | 2026-04-05 01:00:32.408316 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-05 01:00:32.408322 | orchestrator | Sunday 05 April 2026 00:53:01 +0000 (0:00:02.579) 0:04:14.296 ********** 2026-04-05 01:00:32.408328 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:00:32.408335 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:00:32.408341 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:00:32.408347 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:00:32.408353 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:00:32.408359 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:00:32.408365 | orchestrator | 2026-04-05 01:00:32.408371 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-05 01:00:32.408377 | orchestrator | Sunday 05 April 2026 00:53:05 +0000 (0:00:03.560) 0:04:17.856 ********** 2026-04-05 01:00:32.408383 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:00:32.408389 | 
orchestrator | changed: [testbed-node-5] 2026-04-05 01:00:32.408396 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:00:32.408402 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:00:32.408408 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:00:32.408414 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:00:32.408420 | orchestrator | 2026-04-05 01:00:32.408426 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-04-05 01:00:32.408432 | orchestrator | Sunday 05 April 2026 00:53:07 +0000 (0:00:02.059) 0:04:19.915 ********** 2026-04-05 01:00:32.408438 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.408444 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.408451 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.408457 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:00:32.408463 | orchestrator | 2026-04-05 01:00:32.408470 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-04-05 01:00:32.408497 | orchestrator | Sunday 05 April 2026 00:53:08 +0000 (0:00:01.102) 0:04:21.018 ********** 2026-04-05 01:00:32.408504 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:32.408510 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:32.408517 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:32.408523 | orchestrator | 2026-04-05 01:00:32.408529 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-04-05 01:00:32.408535 | orchestrator | Sunday 05 April 2026 00:53:08 +0000 (0:00:00.366) 0:04:21.384 ********** 2026-04-05 01:00:32.408542 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:00:32.408548 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:00:32.408554 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:00:32.408560 | orchestrator | 
2026-04-05 01:00:32.408567 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-04-05 01:00:32.408573 | orchestrator | Sunday 05 April 2026 00:53:09 +0000 (0:00:01.329) 0:04:22.714 ********** 2026-04-05 01:00:32.408580 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-05 01:00:32.408586 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-05 01:00:32.408592 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-05 01:00:32.408598 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:32.408604 | orchestrator | 2026-04-05 01:00:32.408616 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-04-05 01:00:32.408622 | orchestrator | Sunday 05 April 2026 00:53:11 +0000 (0:00:01.171) 0:04:23.886 ********** 2026-04-05 01:00:32.408629 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:32.408635 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:32.408641 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:32.408648 | orchestrator | 2026-04-05 01:00:32.408654 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-04-05 01:00:32.408660 | orchestrator | Sunday 05 April 2026 00:53:11 +0000 (0:00:00.334) 0:04:24.221 ********** 2026-04-05 01:00:32.408666 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:32.408673 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:32.408679 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:32.408686 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:00:32.408692 | orchestrator | 2026-04-05 01:00:32.408698 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-04-05 01:00:32.408704 | orchestrator | Sunday 05 April 2026 00:53:12 +0000 
(0:00:01.298) 0:04:25.519 ********** 2026-04-05 01:00:32.408711 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 01:00:32.408717 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 01:00:32.408723 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 01:00:32.408730 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.408736 | orchestrator | 2026-04-05 01:00:32.408742 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-04-05 01:00:32.408748 | orchestrator | Sunday 05 April 2026 00:53:13 +0000 (0:00:00.407) 0:04:25.927 ********** 2026-04-05 01:00:32.408754 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.408760 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.408767 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.408773 | orchestrator | 2026-04-05 01:00:32.408779 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-04-05 01:00:32.408785 | orchestrator | Sunday 05 April 2026 00:53:13 +0000 (0:00:00.701) 0:04:26.629 ********** 2026-04-05 01:00:32.408791 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.408797 | orchestrator | 2026-04-05 01:00:32.408807 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-04-05 01:00:32.408814 | orchestrator | Sunday 05 April 2026 00:53:14 +0000 (0:00:00.258) 0:04:26.887 ********** 2026-04-05 01:00:32.408820 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.408826 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.408832 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.408838 | orchestrator | 2026-04-05 01:00:32.408844 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-04-05 01:00:32.408851 | orchestrator | Sunday 05 April 2026 00:53:14 
+0000 (0:00:00.349) 0:04:27.237 ********** 2026-04-05 01:00:32.408857 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.408863 | orchestrator | 2026-04-05 01:00:32.408869 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-04-05 01:00:32.408875 | orchestrator | Sunday 05 April 2026 00:53:14 +0000 (0:00:00.236) 0:04:27.473 ********** 2026-04-05 01:00:32.408881 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.408887 | orchestrator | 2026-04-05 01:00:32.408893 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-04-05 01:00:32.408900 | orchestrator | Sunday 05 April 2026 00:53:14 +0000 (0:00:00.216) 0:04:27.690 ********** 2026-04-05 01:00:32.408906 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.408912 | orchestrator | 2026-04-05 01:00:32.408918 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-04-05 01:00:32.408924 | orchestrator | Sunday 05 April 2026 00:53:15 +0000 (0:00:00.136) 0:04:27.826 ********** 2026-04-05 01:00:32.408930 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.408936 | orchestrator | 2026-04-05 01:00:32.408947 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-04-05 01:00:32.408953 | orchestrator | Sunday 05 April 2026 00:53:15 +0000 (0:00:00.217) 0:04:28.044 ********** 2026-04-05 01:00:32.408959 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.408966 | orchestrator | 2026-04-05 01:00:32.409015 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-04-05 01:00:32.409022 | orchestrator | Sunday 05 April 2026 00:53:15 +0000 (0:00:00.228) 0:04:28.272 ********** 2026-04-05 01:00:32.409028 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 01:00:32.409034 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-3)  2026-04-05 01:00:32.409040 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 01:00:32.409046 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.409052 | orchestrator | 2026-04-05 01:00:32.409059 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-04-05 01:00:32.409088 | orchestrator | Sunday 05 April 2026 00:53:16 +0000 (0:00:00.709) 0:04:28.982 ********** 2026-04-05 01:00:32.409095 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.409101 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.409107 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.409114 | orchestrator | 2026-04-05 01:00:32.409120 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-04-05 01:00:32.409126 | orchestrator | Sunday 05 April 2026 00:53:16 +0000 (0:00:00.664) 0:04:29.646 ********** 2026-04-05 01:00:32.409132 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.409138 | orchestrator | 2026-04-05 01:00:32.409144 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-04-05 01:00:32.409151 | orchestrator | Sunday 05 April 2026 00:53:17 +0000 (0:00:00.253) 0:04:29.900 ********** 2026-04-05 01:00:32.409157 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.409163 | orchestrator | 2026-04-05 01:00:32.409169 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-04-05 01:00:32.409176 | orchestrator | Sunday 05 April 2026 00:53:17 +0000 (0:00:00.224) 0:04:30.124 ********** 2026-04-05 01:00:32.409182 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:32.409188 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:32.409194 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:32.409201 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
Sunday 05 April 2026 00:53:18 +0000 (0:00:00.881)       0:04:31.006 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
Sunday 05 April 2026 00:53:18 +0000 (0:00:00.622)       0:04:31.628 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
Sunday 05 April 2026 00:53:20 +0000 (0:00:01.332)       0:04:32.961 **********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
Sunday 05 April 2026 00:53:20 +0000 (0:00:00.630)       0:04:33.591 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
Sunday 05 April 2026 00:53:21 +0000 (0:00:00.351)       0:04:33.943 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
Sunday 05 April 2026 00:53:22 +0000 (0:00:01.187)       0:04:35.130 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
Sunday 05 April 2026 00:53:22 +0000 (0:00:00.370)       0:04:35.501 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
Sunday 05 April 2026 00:53:24 +0000 (0:00:01.635)       0:04:37.137 **********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
Sunday 05 April 2026 00:53:25 +0000 (0:00:00.710)       0:04:37.848 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
Sunday 05 April 2026 00:53:25 +0000 (0:00:00.439)       0:04:38.287 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
Sunday 05 April 2026 00:53:26 +0000 (0:00:00.877)       0:04:39.165 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
Sunday 05 April 2026 00:53:27 +0000 (0:00:01.454)       0:04:40.620 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
Sunday 05 April 2026 00:53:28 +0000 (0:00:00.467)       0:04:41.087 **********
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
Sunday 05 April 2026 00:53:30 +0000 (0:00:01.812)       0:04:42.899 **********
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
Sunday 05 April 2026 00:53:30 +0000 (0:00:00.738)       0:04:43.638 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-mon] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Sunday 05 April 2026 00:53:31 +0000 (0:00:00.626)       0:04:44.264 **********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1, testbed-node-0, testbed-node-2

TASK [ceph-handler : Include check_running_containers.yml] *********************
Sunday 05 April 2026 00:53:32 +0000 (0:00:00.952)       0:04:45.217 **********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Check for a mon container] ********************************
Sunday 05 April 2026 00:53:33 +0000 (0:00:00.999)       0:04:45.833 **********
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [ceph-handler : Check for an osd container] *******************************
Sunday 05 April 2026 00:53:34 +0000 (0:00:00.999)       0:04:46.833 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mds container] ********************************
Sunday 05 April 2026 00:53:34 +0000 (0:00:00.617)       0:04:47.450 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a rgw container] ********************************
Sunday 05 April 2026 00:53:35 +0000 (0:00:00.304)       0:04:47.754 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mgr container] ********************************
Sunday 05 April 2026 00:53:35 +0000 (0:00:00.326)       0:04:48.081 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Sunday 05 April 2026 00:53:36 +0000 (0:00:00.735)       0:04:48.816 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a nfs container] ********************************
Sunday 05 April 2026 00:53:36 +0000 (0:00:00.370)       0:04:49.186 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Sunday 05 April 2026 00:53:37 +0000 (0:00:00.704)       0:04:49.891 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Sunday 05 April 2026 00:53:38 +0000 (0:00:00.847)       0:04:50.738 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Sunday 05 April 2026 00:53:38 +0000 (0:00:00.899)       0:04:51.637 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Sunday 05 April 2026 00:53:39 +0000 (0:00:00.474)       0:04:52.111 **********
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Sunday 05 April 2026 00:53:40 +0000 (0:00:01.422)       0:04:53.534 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Sunday 05 April 2026 00:53:41 +0000 (0:00:00.656)       0:04:54.191 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Sunday 05 April 2026 00:53:41 +0000 (0:00:00.416)       0:04:54.607 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Sunday 05 April 2026 00:53:42 +0000 (0:00:00.339)       0:04:54.948 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Sunday 05 April 2026 00:53:42 +0000 (0:00:00.615)       0:04:55.563 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Sunday 05 April 2026 00:53:43 +0000 (0:00:00.455)       0:04:56.018 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Sunday 05 April 2026 00:53:43 +0000 (0:00:00.343)       0:04:56.362 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Sunday 05 April 2026 00:53:43 +0000 (0:00:00.345)       0:04:56.708 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
Sunday 05 April 2026 00:53:44 +0000 (0:00:00.747)       0:04:57.455 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Include deploy_monitors.yml] **********************************
Sunday 05 April 2026 00:53:45 +0000 (0:00:00.383)       0:04:57.839 **********
included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Check if monitor initial keyring already exists] **************
Sunday 05 April 2026 00:53:45 +0000 (0:00:00.574)       0:04:58.413 **********
skipping: [testbed-node-0]

TASK [ceph-mon : Generate monitor initial keyring] *****************************
Sunday 05 April 2026 00:53:45 +0000 (0:00:00.293)       0:04:58.706 **********
changed: [testbed-node-0 -> localhost]

TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
Sunday 05 April 2026 00:53:47 +0000 (0:00:01.262)       0:04:59.969 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Get initial keyring when it already exists] *******************
Sunday 05 April 2026 00:53:47 +0000 (0:00:00.395)       0:05:00.364 **********
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [ceph-mon : Create monitor initial keyring] *******************************
Sunday 05 April 2026 00:53:48 +0000 (0:00:00.448)       0:05:00.813 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
Sunday 05 April 2026 00:53:49 +0000 (0:00:01.535)       0:05:02.349 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Create monitor directory] *************************************
Sunday 05 April 2026 00:53:50 +0000 (0:00:01.345)       0:05:03.694 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
Sunday 05 April 2026 00:53:52 +0000 (0:00:01.090)       0:05:04.785 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Create admin keyring] *****************************************
Sunday 05 April 2026 00:53:53 +0000 (0:00:01.262)       0:05:06.047 **********
changed: [testbed-node-0]

TASK [ceph-mon : Slurp admin keyring] ******************************************
Sunday 05 April 2026 00:53:54 +0000 (0:00:01.399)       0:05:07.447 **********
ok: [testbed-node-0]

TASK [ceph-mon : Copy admin keyring over to mons] ******************************
Sunday 05 April 2026 00:53:55 +0000 (0:00:00.823)       0:05:08.270 **********
changed: [testbed-node-0] => (item=None)
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
ok: [testbed-node-1] => (item=None)
ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
changed: [testbed-node-0 -> {{ item }}]
ok: [testbed-node-2] => (item=None)
ok: [testbed-node-2 -> {{ item }}]
ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
ok: [testbed-node-1 -> {{ item }}]

TASK [ceph-mon : Import admin keyring into mon keyring] ************************
Sunday 05 April 2026 00:53:59 +0000 (0:00:04.374)       0:05:12.644 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Set_fact ceph-mon container command] **************************
Sunday 05 April 2026 00:54:01 +0000 (0:00:01.992)       0:05:14.637 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Set_fact monmaptool container command] ************************
Sunday 05 April 2026 00:54:02 +0000 (0:00:00.404)       0:05:15.041 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Generate initial monmap] **************************************
Sunday 05 April 2026 00:54:02 +0000 (0:00:00.298)       0:05:15.340 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
Sunday 05 April 2026 00:54:05 +0000 (0:00:02.463)       0:05:17.803 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
Sunday 05 April 2026 00:54:06 +0000 (0:00:01.932)       0:05:19.736 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Include start_monitor.yml] ************************************
Sunday 05 April 2026 00:54:08 +0000 (0:00:01.037)       0:05:20.774 **********
included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Ensure systemd service override directory exists] *************
Sunday 05 April 2026 00:54:09 +0000 (0:00:00.969)       0:05:21.743 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
Sunday 05 April 2026 00:54:09 +0000 (0:00:00.709)       0:05:22.453 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Include_tasks systemd.yml] ************************************
Sunday 05 April 2026 00:54:10 +0000 (0:00:00.391)       0:05:22.845 **********
included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Generate systemd unit file for mon container] *****************
Sunday 05 April 2026 00:54:11 +0000 (0:00:01.016)       0:05:23.861 **********
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-0]

TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
Sunday 05 April 2026 00:54:14 +0000 (0:00:03.177)       0:05:27.039 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Enable ceph-mon.target] ***************************************
Sunday 05 April 2026 00:54:15 +0000 (0:00:01.569)       0:05:28.609 **********
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [ceph-mon : Start the monitor service] ************************************
Sunday 05 April 2026 00:54:17 +0000 (0:00:02.009)       0:05:30.618 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
Sunday 05 April 2026 00:54:19 +0000 (0:00:02.104)       0:05:32.723 **********
included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
Sunday 05 April 2026 00:54:20 +0000 (0:00:00.885)       0:05:33.609 **********
FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
ok: [testbed-node-0]

TASK [ceph-mon : Fetch ceph initial keys] **************************************
Sunday 05 April 2026 00:54:42 +0000 (0:00:21.493)       0:05:55.103 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Include secure_cluster.yml] ***********************************
Sunday 05 April 2026 00:54:48 +0000 (0:00:06.366)       0:06:01.469 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Set cluster configs] ******************************************
Sunday 05 April 2026 00:54:49 +0000 (0:00:00.275)       0:06:01.745 **********
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c1518d291ff68b2dffb19fafa473f3f260088120'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c1518d291ff68b2dffb19fafa473f3f260088120'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c1518d291ff68b2dffb19fafa473f3f260088120'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c1518d291ff68b2dffb19fafa473f3f260088120'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c1518d291ff68b2dffb19fafa473f3f260088120'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c1518d291ff68b2dffb19fafa473f3f260088120'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__c1518d291ff68b2dffb19fafa473f3f260088120'}])

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Sunday 05 April 2026 00:54:59 +0000 (0:00:10.822)       0:06:12.567 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mons handler] **********************************
Sunday 05 April 2026 00:55:00 +0000 (0:00:00.395)       0:06:12.963 **********
included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
Sunday 05 April 2026 00:55:01 +0000 (0:00:00.988)       0:06:13.951 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
Sunday 05 April 2026 00:55:01 +0000 (0:00:00.319)       0:06:14.271 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
Sunday 05 April 2026 00:55:01 +0000 (0:00:00.424)       0:06:14.696 **********
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
Sunday 05 April 2026 00:55:02 +0000 (0:00:00.720)       0:06:15.416 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-mgr] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Sunday 05 April 2026 00:55:03 +0000 (0:00:00.860)       0:06:16.277 **********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Include check_running_containers.yml] *********************
Sunday 05 April 2026 00:55:04 +0000 (0:00:00.519)       0:06:16.796 **********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1,
testbed-node-0, testbed-node-2 2026-04-05 01:00:32.412419 | orchestrator | 2026-04-05 01:00:32.412424 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-05 01:00:32.412429 | orchestrator | Sunday 05 April 2026 00:55:04 +0000 (0:00:00.843) 0:06:17.640 ********** 2026-04-05 01:00:32.412434 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:32.412438 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:32.412443 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:32.412448 | orchestrator | 2026-04-05 01:00:32.412453 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-05 01:00:32.412458 | orchestrator | Sunday 05 April 2026 00:55:05 +0000 (0:00:00.829) 0:06:18.470 ********** 2026-04-05 01:00:32.412463 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:32.412467 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:32.412472 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:32.412481 | orchestrator | 2026-04-05 01:00:32.412486 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-05 01:00:32.412491 | orchestrator | Sunday 05 April 2026 00:55:06 +0000 (0:00:00.486) 0:06:18.956 ********** 2026-04-05 01:00:32.412496 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:32.412501 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:32.412505 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:32.412510 | orchestrator | 2026-04-05 01:00:32.412515 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-05 01:00:32.412520 | orchestrator | Sunday 05 April 2026 00:55:06 +0000 (0:00:00.361) 0:06:19.318 ********** 2026-04-05 01:00:32.412525 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:32.412529 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:32.412534 | orchestrator | skipping: 
[testbed-node-2] 2026-04-05 01:00:32.412539 | orchestrator | 2026-04-05 01:00:32.412544 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-05 01:00:32.412548 | orchestrator | Sunday 05 April 2026 00:55:06 +0000 (0:00:00.293) 0:06:19.611 ********** 2026-04-05 01:00:32.412553 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:32.412558 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:32.412563 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:32.412568 | orchestrator | 2026-04-05 01:00:32.412573 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-05 01:00:32.412577 | orchestrator | Sunday 05 April 2026 00:55:08 +0000 (0:00:01.174) 0:06:20.785 ********** 2026-04-05 01:00:32.412582 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:32.412587 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:32.412592 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:32.412596 | orchestrator | 2026-04-05 01:00:32.412605 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-05 01:00:32.412610 | orchestrator | Sunday 05 April 2026 00:55:08 +0000 (0:00:00.328) 0:06:21.114 ********** 2026-04-05 01:00:32.412615 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:32.412620 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:32.412624 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:32.412629 | orchestrator | 2026-04-05 01:00:32.412634 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-05 01:00:32.412639 | orchestrator | Sunday 05 April 2026 00:55:08 +0000 (0:00:00.304) 0:06:21.418 ********** 2026-04-05 01:00:32.412644 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:32.412648 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:32.412653 | orchestrator | ok: [testbed-node-2] 2026-04-05 
01:00:32.412658 | orchestrator | 2026-04-05 01:00:32.412663 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-05 01:00:32.412668 | orchestrator | Sunday 05 April 2026 00:55:09 +0000 (0:00:00.777) 0:06:22.196 ********** 2026-04-05 01:00:32.412672 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:32.412677 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:32.412682 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:32.412687 | orchestrator | 2026-04-05 01:00:32.412691 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-05 01:00:32.412696 | orchestrator | Sunday 05 April 2026 00:55:10 +0000 (0:00:01.022) 0:06:23.219 ********** 2026-04-05 01:00:32.412701 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:32.412706 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:32.412711 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:32.412715 | orchestrator | 2026-04-05 01:00:32.412720 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-05 01:00:32.412725 | orchestrator | Sunday 05 April 2026 00:55:10 +0000 (0:00:00.293) 0:06:23.512 ********** 2026-04-05 01:00:32.412730 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:32.412735 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:32.412739 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:32.412744 | orchestrator | 2026-04-05 01:00:32.412749 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-05 01:00:32.412758 | orchestrator | Sunday 05 April 2026 00:55:11 +0000 (0:00:00.342) 0:06:23.854 ********** 2026-04-05 01:00:32.412762 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:32.412767 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:32.412772 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:32.412777 | orchestrator | 
2026-04-05 01:00:32.412782 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-05 01:00:32.412801 | orchestrator | Sunday 05 April 2026 00:55:11 +0000 (0:00:00.273) 0:06:24.127 **********
2026-04-05 01:00:32.412807 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.412812 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.412817 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.412822 | orchestrator |
2026-04-05 01:00:32.412826 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-05 01:00:32.412831 | orchestrator | Sunday 05 April 2026 00:55:11 +0000 (0:00:00.450) 0:06:24.577 **********
2026-04-05 01:00:32.412836 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.412841 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.412846 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.412851 | orchestrator |
2026-04-05 01:00:32.412859 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-05 01:00:32.412867 | orchestrator | Sunday 05 April 2026 00:55:12 +0000 (0:00:00.347) 0:06:24.925 **********
2026-04-05 01:00:32.412875 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.412883 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.412890 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.412898 | orchestrator |
2026-04-05 01:00:32.412905 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-05 01:00:32.412913 | orchestrator | Sunday 05 April 2026 00:55:12 +0000 (0:00:00.374) 0:06:25.299 **********
2026-04-05 01:00:32.412919 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.412926 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.412933 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.412940 | orchestrator |
2026-04-05 01:00:32.412949 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-05 01:00:32.412957 | orchestrator | Sunday 05 April 2026 00:55:12 +0000 (0:00:00.329) 0:06:25.629 **********
2026-04-05 01:00:32.412965 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:32.412989 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:32.412997 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:32.413005 | orchestrator |
2026-04-05 01:00:32.413014 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-05 01:00:32.413021 | orchestrator | Sunday 05 April 2026 00:55:13 +0000 (0:00:00.630) 0:06:26.260 **********
2026-04-05 01:00:32.413029 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:32.413037 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:32.413042 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:32.413047 | orchestrator |
2026-04-05 01:00:32.413052 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-05 01:00:32.413057 | orchestrator | Sunday 05 April 2026 00:55:13 +0000 (0:00:00.344) 0:06:26.605 **********
2026-04-05 01:00:32.413061 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:32.413066 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:32.413071 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:32.413075 | orchestrator |
2026-04-05 01:00:32.413080 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-04-05 01:00:32.413085 | orchestrator | Sunday 05 April 2026 00:55:14 +0000 (0:00:00.583) 0:06:27.189 **********
2026-04-05 01:00:32.413090 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-05 01:00:32.413095 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 01:00:32.413100 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 01:00:32.413104 | orchestrator |
2026-04-05 01:00:32.413109 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-04-05 01:00:32.413120 | orchestrator | Sunday 05 April 2026 00:55:15 +0000 (0:00:00.982) 0:06:28.171 **********
2026-04-05 01:00:32.413128 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:00:32.413133 | orchestrator |
2026-04-05 01:00:32.413138 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-04-05 01:00:32.413143 | orchestrator | Sunday 05 April 2026 00:55:16 +0000 (0:00:01.075) 0:06:29.247 **********
2026-04-05 01:00:32.413147 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:00:32.413152 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:00:32.413157 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:00:32.413162 | orchestrator |
2026-04-05 01:00:32.413166 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-04-05 01:00:32.413171 | orchestrator | Sunday 05 April 2026 00:55:17 +0000 (0:00:01.112) 0:06:30.359 **********
2026-04-05 01:00:32.413176 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.413181 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.413186 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.413190 | orchestrator |
2026-04-05 01:00:32.413195 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-04-05 01:00:32.413200 | orchestrator | Sunday 05 April 2026 00:55:18 +0000 (0:00:00.402) 0:06:30.762 **********
2026-04-05 01:00:32.413205 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-05 01:00:32.413210 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-05 01:00:32.413214 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-05 01:00:32.413219 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-04-05 01:00:32.413225 | orchestrator |
2026-04-05 01:00:32.413233 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-04-05 01:00:32.413241 | orchestrator | Sunday 05 April 2026 00:55:27 +0000 (0:00:09.266) 0:06:40.029 **********
2026-04-05 01:00:32.413249 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:32.413258 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:32.413265 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:32.413272 | orchestrator |
2026-04-05 01:00:32.413280 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-04-05 01:00:32.413287 | orchestrator | Sunday 05 April 2026 00:55:27 +0000 (0:00:00.521) 0:06:40.551 **********
2026-04-05 01:00:32.413295 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-04-05 01:00:32.413301 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-05 01:00:32.413308 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-05 01:00:32.413316 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-05 01:00:32.413324 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-05 01:00:32.413373 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-05 01:00:32.413380 | orchestrator |
2026-04-05 01:00:32.413385 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-04-05 01:00:32.413390 | orchestrator | Sunday 05 April 2026 00:55:29 +0000 (0:00:01.891) 0:06:42.442 **********
2026-04-05 01:00:32.413394 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-04-05 01:00:32.413399 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-05 01:00:32.413404 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-05 01:00:32.413409 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-05 01:00:32.413414 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-04-05 01:00:32.413419 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-04-05 01:00:32.413424 | orchestrator |
2026-04-05 01:00:32.413429 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-04-05 01:00:32.413434 | orchestrator | Sunday 05 April 2026 00:55:30 +0000 (0:00:01.242) 0:06:43.685 **********
2026-04-05 01:00:32.413438 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:32.413451 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:32.413456 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:32.413461 | orchestrator |
2026-04-05 01:00:32.413465 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-04-05 01:00:32.413470 | orchestrator | Sunday 05 April 2026 00:55:31 +0000 (0:00:00.730) 0:06:44.415 **********
2026-04-05 01:00:32.413475 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.413480 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.413485 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.413489 | orchestrator |
2026-04-05 01:00:32.413494 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-04-05 01:00:32.413499 | orchestrator | Sunday 05 April 2026 00:55:32 +0000 (0:00:00.384) 0:06:45.042 **********
2026-04-05 01:00:32.413504 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.413509 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.413514 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.413518 | orchestrator |
2026-04-05 01:00:32.413523 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-04-05 01:00:32.413528 | orchestrator | Sunday 05 April 2026 00:55:32 +0000 (0:00:00.469) 0:06:45.427 **********
2026-04-05 01:00:32.413533 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:00:32.413538 | orchestrator |
2026-04-05 01:00:32.413543 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-04-05 01:00:32.413547 | orchestrator | Sunday 05 April 2026 00:55:33 +0000 (0:00:00.469) 0:06:45.896 **********
2026-04-05 01:00:32.413552 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.413557 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.413562 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.413567 | orchestrator |
2026-04-05 01:00:32.413572 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-04-05 01:00:32.413576 | orchestrator | Sunday 05 April 2026 00:55:33 +0000 (0:00:00.270) 0:06:46.167 **********
2026-04-05 01:00:32.413581 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.413586 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.413591 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.413596 | orchestrator |
2026-04-05 01:00:32.413600 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-04-05 01:00:32.413605 | orchestrator | Sunday 05 April 2026 00:55:33 +0000 (0:00:00.441) 0:06:46.608 **********
2026-04-05 01:00:32.413614 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:00:32.413619 | orchestrator |
2026-04-05 01:00:32.413623 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-04-05 01:00:32.413628 | orchestrator | Sunday 05 April 2026 00:55:34 +0000 (0:00:00.503) 0:06:47.111 **********
2026-04-05 01:00:32.413633 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:00:32.413638 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:00:32.413643 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:00:32.413647 | orchestrator |
2026-04-05 01:00:32.413652 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-04-05 01:00:32.413657 | orchestrator | Sunday 05 April 2026 00:55:35 +0000 (0:00:01.376) 0:06:48.487 **********
2026-04-05 01:00:32.413662 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:00:32.413667 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:00:32.413672 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:00:32.413676 | orchestrator |
2026-04-05 01:00:32.413681 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-04-05 01:00:32.413686 | orchestrator | Sunday 05 April 2026 00:55:37 +0000 (0:00:01.396) 0:06:49.884 **********
2026-04-05 01:00:32.413691 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:00:32.413696 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:00:32.413701 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:00:32.413705 | orchestrator |
2026-04-05 01:00:32.413710 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-04-05 01:00:32.413728 | orchestrator | Sunday 05 April 2026 00:55:40 +0000 (0:00:02.862) 0:06:52.746 **********
2026-04-05 01:00:32.413736 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:00:32.413745 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:00:32.413754 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:00:32.413762 | orchestrator |
2026-04-05 01:00:32.413770 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-04-05 01:00:32.413778 | orchestrator | Sunday 05 April 2026 00:55:42 +0000 (0:00:02.442) 0:06:55.189 **********
2026-04-05 01:00:32.413786 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.413795 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.413803 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-04-05 01:00:32.413811 | orchestrator |
2026-04-05 01:00:32.413819 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-04-05 01:00:32.413826 | orchestrator | Sunday 05 April 2026 00:55:42 +0000 (0:00:00.416) 0:06:55.605 **********
2026-04-05 01:00:32.413860 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-04-05 01:00:32.413868 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-04-05 01:00:32.413875 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-05 01:00:32.413882 | orchestrator |
2026-04-05 01:00:32.413889 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-04-05 01:00:32.413896 | orchestrator | Sunday 05 April 2026 00:55:56 +0000 (0:00:13.651) 0:07:09.257 **********
2026-04-05 01:00:32.413904 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-05 01:00:32.413911 | orchestrator |
2026-04-05 01:00:32.413917 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-04-05 01:00:32.413925 | orchestrator | Sunday 05 April 2026 00:55:57 +0000 (0:00:01.344) 0:07:10.602 **********
2026-04-05 01:00:32.413933 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:32.413940 | orchestrator |
2026-04-05 01:00:32.413947 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-04-05 01:00:32.413955 | orchestrator | Sunday 05 April 2026 00:55:58 +0000 (0:00:00.400) 0:07:11.003 **********
2026-04-05 01:00:32.413962 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:32.413992 | orchestrator |
2026-04-05 01:00:32.414000 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-04-05 01:00:32.414008 | orchestrator | Sunday 05 April 2026 00:55:58 +0000 (0:00:00.187) 0:07:11.191 **********
2026-04-05 01:00:32.414044 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-04-05 01:00:32.414053 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-04-05 01:00:32.414061 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-04-05 01:00:32.414070 | orchestrator |
2026-04-05 01:00:32.414079 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-04-05 01:00:32.414087 | orchestrator | Sunday 05 April 2026 00:56:04 +0000 (0:00:06.135) 0:07:17.326 **********
2026-04-05 01:00:32.414096 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-04-05 01:00:32.414104 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-04-05 01:00:32.414113 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-04-05 01:00:32.414121 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-04-05 01:00:32.414130 | orchestrator |
2026-04-05 01:00:32.414138 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-05 01:00:32.414147 | orchestrator | Sunday 05 April 2026 00:56:09 +0000 (0:00:04.553) 0:07:21.880 **********
2026-04-05 01:00:32.414156 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:00:32.414165 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:00:32.414184 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:00:32.414192 | orchestrator |
2026-04-05 01:00:32.414201 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-04-05 01:00:32.414209 | orchestrator | Sunday 05 April 2026 00:56:09 +0000 (0:00:00.838) 0:07:22.719 **********
2026-04-05 01:00:32.414217 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:00:32.414226 | orchestrator |
2026-04-05 01:00:32.414240 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-04-05 01:00:32.414249 | orchestrator | Sunday 05 April 2026 00:56:10 +0000 (0:00:00.501) 0:07:23.220 **********
2026-04-05 01:00:32.414258 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:32.414267 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:32.414275 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:32.414284 | orchestrator |
2026-04-05 01:00:32.414292 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-04-05 01:00:32.414300 | orchestrator | Sunday 05 April 2026 00:56:10 +0000 (0:00:00.311) 0:07:23.532 **********
2026-04-05 01:00:32.414309 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:00:32.414318 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:00:32.414327 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:00:32.414336 | orchestrator |
2026-04-05 01:00:32.414344 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-04-05 01:00:32.414353 | orchestrator | Sunday 05 April 2026 00:56:12 +0000 (0:00:01.444) 0:07:24.976 **********
2026-04-05 01:00:32.414362 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-05 01:00:32.414371 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-05 01:00:32.414381 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-05 01:00:32.414390 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.414399 | orchestrator |
2026-04-05 01:00:32.414408 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-04-05 01:00:32.414417 | orchestrator | Sunday 05 April 2026 00:56:12 +0000 (0:00:00.623) 0:07:25.600 **********
2026-04-05 01:00:32.414426 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:32.414435 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:32.414443 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:32.414451 | orchestrator |
2026-04-05 01:00:32.414458 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-04-05 01:00:32.414467 | orchestrator |
2026-04-05 01:00:32.414475 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-05 01:00:32.414483 | orchestrator | Sunday 05 April 2026 00:56:13 +0000 (0:00:00.562) 0:07:26.162 **********
2026-04-05 01:00:32.414492 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 01:00:32.414501 | orchestrator |
2026-04-05 01:00:32.414509 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-05 01:00:32.414517 | orchestrator | Sunday 05 April 2026 00:56:14 +0000 (0:00:00.764) 0:07:26.927 **********
2026-04-05 01:00:32.414574 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 01:00:32.414580 | orchestrator |
2026-04-05 01:00:32.414585 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-05 01:00:32.414590 | orchestrator | Sunday 05 April 2026 00:56:14 +0000 (0:00:00.559) 0:07:27.487 **********
2026-04-05 01:00:32.414595 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.414600 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.414605 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.414609 | orchestrator |
2026-04-05 01:00:32.414614 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-05 01:00:32.414619 | orchestrator | Sunday 05 April 2026 00:56:15 +0000 (0:00:00.309) 0:07:27.797 **********
2026-04-05 01:00:32.414631 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.414636 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.414641 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.414645 | orchestrator |
2026-04-05 01:00:32.414650 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-05 01:00:32.414655 | orchestrator | Sunday 05 April 2026 00:56:16 +0000 (0:00:01.168) 0:07:28.965 **********
2026-04-05 01:00:32.414660 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.414664 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.414669 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.414674 | orchestrator |
2026-04-05 01:00:32.414679 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-05 01:00:32.414684 | orchestrator | Sunday 05 April 2026 00:56:17 +0000 (0:00:00.925) 0:07:29.890 **********
2026-04-05 01:00:32.414689 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.414693 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.414698 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.414703 | orchestrator |
2026-04-05 01:00:32.414708 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-05 01:00:32.414712 | orchestrator | Sunday 05 April 2026 00:56:17 +0000 (0:00:00.811) 0:07:30.702 **********
2026-04-05 01:00:32.414717 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.414722 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.414727 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.414731 | orchestrator |
2026-04-05 01:00:32.414736 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-05 01:00:32.414741 | orchestrator | Sunday 05 April 2026 00:56:18 +0000 (0:00:00.304) 0:07:31.006 **********
2026-04-05 01:00:32.414746 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.414751 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.414755 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.414760 | orchestrator |
2026-04-05 01:00:32.414765 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-05 01:00:32.414769 | orchestrator | Sunday 05 April 2026 00:56:18 +0000 (0:00:00.630) 0:07:31.637 **********
2026-04-05 01:00:32.414774 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.414779 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.414784 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.414788 | orchestrator |
2026-04-05 01:00:32.414793 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-05 01:00:32.414798 | orchestrator | Sunday 05 April 2026 00:56:19 +0000 (0:00:00.338) 0:07:31.976 **********
2026-04-05 01:00:32.414803 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.414807 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.414812 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.414817 | orchestrator |
2026-04-05 01:00:32.414822 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-05 01:00:32.414832 | orchestrator | Sunday 05 April 2026 00:56:19 +0000 (0:00:00.747) 0:07:32.723 **********
2026-04-05 01:00:32.414836 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.414845 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.414853 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.414861 | orchestrator |
2026-04-05 01:00:32.414869 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-05 01:00:32.414878 | orchestrator | Sunday 05 April 2026 00:56:20 +0000 (0:00:00.776) 0:07:33.500 **********
2026-04-05 01:00:32.414886 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.414894 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.414903 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.414912 | orchestrator |
2026-04-05 01:00:32.414921 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-05 01:00:32.414929 | orchestrator | Sunday 05 April 2026 00:56:21 +0000 (0:00:00.616) 0:07:34.116 **********
2026-04-05 01:00:32.414938 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.414947 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.414963 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.415017 | orchestrator |
2026-04-05 01:00:32.415026 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-05 01:00:32.415035 | orchestrator | Sunday 05 April 2026 00:56:21 +0000 (0:00:00.325) 0:07:34.442 **********
2026-04-05 01:00:32.415043 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.415051 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.415060 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.415068 | orchestrator |
2026-04-05 01:00:32.415076 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-05 01:00:32.415084 | orchestrator | Sunday 05 April 2026 00:56:22 +0000 (0:00:00.347) 0:07:34.789 **********
2026-04-05 01:00:32.415091 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.415097 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.415101 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.415106 | orchestrator |
2026-04-05 01:00:32.415111 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-05 01:00:32.415116 | orchestrator | Sunday
05 April 2026 00:56:22 +0000 (0:00:00.340) 0:07:35.129 ********** 2026-04-05 01:00:32.415120 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:32.415125 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.415130 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:32.415134 | orchestrator | 2026-04-05 01:00:32.415139 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-05 01:00:32.415144 | orchestrator | Sunday 05 April 2026 00:56:23 +0000 (0:00:00.670) 0:07:35.800 ********** 2026-04-05 01:00:32.415149 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.415153 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.415158 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.415163 | orchestrator | 2026-04-05 01:00:32.415173 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-05 01:00:32.415178 | orchestrator | Sunday 05 April 2026 00:56:23 +0000 (0:00:00.287) 0:07:36.087 ********** 2026-04-05 01:00:32.415183 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.415188 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.415193 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.415197 | orchestrator | 2026-04-05 01:00:32.415202 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-05 01:00:32.415207 | orchestrator | Sunday 05 April 2026 00:56:23 +0000 (0:00:00.276) 0:07:36.363 ********** 2026-04-05 01:00:32.415212 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.415216 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.415221 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.415226 | orchestrator | 2026-04-05 01:00:32.415231 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-05 01:00:32.415235 | orchestrator | Sunday 05 April 2026 
00:56:23 +0000 (0:00:00.290) 0:07:36.654 ********** 2026-04-05 01:00:32.415240 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:32.415245 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.415249 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:32.415254 | orchestrator | 2026-04-05 01:00:32.415259 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-05 01:00:32.415264 | orchestrator | Sunday 05 April 2026 00:56:24 +0000 (0:00:00.506) 0:07:37.161 ********** 2026-04-05 01:00:32.415268 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:32.415273 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.415278 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:32.415282 | orchestrator | 2026-04-05 01:00:32.415287 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-04-05 01:00:32.415292 | orchestrator | Sunday 05 April 2026 00:56:24 +0000 (0:00:00.554) 0:07:37.715 ********** 2026-04-05 01:00:32.415297 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:32.415301 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.415306 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:32.415311 | orchestrator | 2026-04-05 01:00:32.415316 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-04-05 01:00:32.415326 | orchestrator | Sunday 05 April 2026 00:56:25 +0000 (0:00:00.287) 0:07:38.002 ********** 2026-04-05 01:00:32.415331 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 01:00:32.415336 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 01:00:32.415341 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 01:00:32.415346 | orchestrator | 2026-04-05 01:00:32.415351 | orchestrator | TASK [ceph-osd : Include_tasks 
system_tuning.yml] ****************************** 2026-04-05 01:00:32.415355 | orchestrator | Sunday 05 April 2026 00:56:26 +0000 (0:00:00.763) 0:07:38.766 ********** 2026-04-05 01:00:32.415360 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:00:32.415365 | orchestrator | 2026-04-05 01:00:32.415370 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-04-05 01:00:32.415374 | orchestrator | Sunday 05 April 2026 00:56:26 +0000 (0:00:00.682) 0:07:39.448 ********** 2026-04-05 01:00:32.415379 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.415383 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.415388 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.415392 | orchestrator | 2026-04-05 01:00:32.415397 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-04-05 01:00:32.415402 | orchestrator | Sunday 05 April 2026 00:56:26 +0000 (0:00:00.267) 0:07:39.716 ********** 2026-04-05 01:00:32.415407 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.415411 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.415416 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.415420 | orchestrator | 2026-04-05 01:00:32.415425 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-04-05 01:00:32.415429 | orchestrator | Sunday 05 April 2026 00:56:27 +0000 (0:00:00.473) 0:07:40.189 ********** 2026-04-05 01:00:32.415436 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:32.415443 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:32.415450 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.415457 | orchestrator | 2026-04-05 01:00:32.415464 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-04-05 01:00:32.415470 | 
orchestrator | Sunday 05 April 2026 00:56:28 +0000 (0:00:01.055) 0:07:41.244 ********** 2026-04-05 01:00:32.415479 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:32.415487 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.415495 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:32.415502 | orchestrator | 2026-04-05 01:00:32.415510 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-04-05 01:00:32.415515 | orchestrator | Sunday 05 April 2026 00:56:28 +0000 (0:00:00.318) 0:07:41.563 ********** 2026-04-05 01:00:32.415520 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-05 01:00:32.415524 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-05 01:00:32.415529 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-05 01:00:32.415534 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-05 01:00:32.415562 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-05 01:00:32.415567 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-05 01:00:32.415572 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-05 01:00:32.415577 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-05 01:00:32.415589 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-05 01:00:32.415594 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-05 01:00:32.415603 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-05 
01:00:32.415609 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-05 01:00:32.415616 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-05 01:00:32.415623 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-05 01:00:32.415629 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-05 01:00:32.415636 | orchestrator | 2026-04-05 01:00:32.415643 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-04-05 01:00:32.415650 | orchestrator | Sunday 05 April 2026 00:56:32 +0000 (0:00:03.335) 0:07:44.899 ********** 2026-04-05 01:00:32.415657 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.415664 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.415671 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.415679 | orchestrator | 2026-04-05 01:00:32.415686 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-04-05 01:00:32.415694 | orchestrator | Sunday 05 April 2026 00:56:32 +0000 (0:00:00.281) 0:07:45.180 ********** 2026-04-05 01:00:32.415702 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:00:32.415708 | orchestrator | 2026-04-05 01:00:32.415713 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-04-05 01:00:32.415717 | orchestrator | Sunday 05 April 2026 00:56:33 +0000 (0:00:00.643) 0:07:45.824 ********** 2026-04-05 01:00:32.415722 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-05 01:00:32.415727 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-05 01:00:32.415731 | orchestrator | ok: [testbed-node-5] => 
(item=/var/lib/ceph/bootstrap-osd/) 2026-04-05 01:00:32.415736 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-04-05 01:00:32.415741 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-04-05 01:00:32.415745 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-04-05 01:00:32.415750 | orchestrator | 2026-04-05 01:00:32.415754 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-04-05 01:00:32.415759 | orchestrator | Sunday 05 April 2026 00:56:34 +0000 (0:00:00.998) 0:07:46.822 ********** 2026-04-05 01:00:32.415763 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:00:32.415768 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-05 01:00:32.415772 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-05 01:00:32.415776 | orchestrator | 2026-04-05 01:00:32.415781 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-04-05 01:00:32.415785 | orchestrator | Sunday 05 April 2026 00:56:35 +0000 (0:00:01.690) 0:07:48.513 ********** 2026-04-05 01:00:32.415790 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-05 01:00:32.415794 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-05 01:00:32.415803 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:00:32.415807 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-05 01:00:32.415812 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-05 01:00:32.415816 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:00:32.415821 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-05 01:00:32.415825 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-05 01:00:32.415830 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:00:32.415834 | orchestrator | 2026-04-05 01:00:32.415839 | 
orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-04-05 01:00:32.415843 | orchestrator | Sunday 05 April 2026 00:56:37 +0000 (0:00:01.246) 0:07:49.759 ********** 2026-04-05 01:00:32.415852 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-05 01:00:32.415857 | orchestrator | 2026-04-05 01:00:32.415861 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-04-05 01:00:32.415866 | orchestrator | Sunday 05 April 2026 00:56:39 +0000 (0:00:02.399) 0:07:52.159 ********** 2026-04-05 01:00:32.415872 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:00:32.415880 | orchestrator | 2026-04-05 01:00:32.415888 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-04-05 01:00:32.415896 | orchestrator | Sunday 05 April 2026 00:56:39 +0000 (0:00:00.495) 0:07:52.654 ********** 2026-04-05 01:00:32.415903 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c330a934-8550-546d-8551-a9ce4f4a4f0f', 'data_vg': 'ceph-c330a934-8550-546d-8551-a9ce4f4a4f0f'}) 2026-04-05 01:00:32.415912 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-bd7e6aba-230a-5307-afd3-3b474950d4d0', 'data_vg': 'ceph-bd7e6aba-230a-5307-afd3-3b474950d4d0'}) 2026-04-05 01:00:32.415918 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-3bb92c70-c222-5380-a7bf-d21f250fcd2a', 'data_vg': 'ceph-3bb92c70-c222-5380-a7bf-d21f250fcd2a'}) 2026-04-05 01:00:32.415924 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-824ea9fd-8e44-5b08-9075-8333765a455e', 'data_vg': 'ceph-824ea9fd-8e44-5b08-9075-8333765a455e'}) 2026-04-05 01:00:32.415938 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-377d1900-3c05-5c55-820b-3d4ba19b512c', 'data_vg': 
'ceph-377d1900-3c05-5c55-820b-3d4ba19b512c'}) 2026-04-05 01:00:32.415946 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ffa9e237-b4c6-554d-9530-d8db42979c07', 'data_vg': 'ceph-ffa9e237-b4c6-554d-9530-d8db42979c07'}) 2026-04-05 01:00:32.415953 | orchestrator | 2026-04-05 01:00:32.415960 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-04-05 01:00:32.415987 | orchestrator | Sunday 05 April 2026 00:57:20 +0000 (0:00:40.461) 0:08:33.115 ********** 2026-04-05 01:00:32.415995 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.416002 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.416010 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.416017 | orchestrator | 2026-04-05 01:00:32.416021 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-04-05 01:00:32.416026 | orchestrator | Sunday 05 April 2026 00:57:20 +0000 (0:00:00.604) 0:08:33.720 ********** 2026-04-05 01:00:32.416030 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:00:32.416035 | orchestrator | 2026-04-05 01:00:32.416039 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-04-05 01:00:32.416044 | orchestrator | Sunday 05 April 2026 00:57:21 +0000 (0:00:00.572) 0:08:34.292 ********** 2026-04-05 01:00:32.416050 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:32.416058 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.416065 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:32.416073 | orchestrator | 2026-04-05 01:00:32.416080 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-04-05 01:00:32.416088 | orchestrator | Sunday 05 April 2026 00:57:22 +0000 (0:00:00.703) 0:08:34.996 ********** 2026-04-05 01:00:32.416095 | orchestrator | ok: 
[testbed-node-3] 2026-04-05 01:00:32.416103 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.416111 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:32.416119 | orchestrator | 2026-04-05 01:00:32.416126 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-04-05 01:00:32.416134 | orchestrator | Sunday 05 April 2026 00:57:24 +0000 (0:00:01.804) 0:08:36.800 ********** 2026-04-05 01:00:32.416140 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:00:32.416145 | orchestrator | 2026-04-05 01:00:32.416150 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-04-05 01:00:32.416160 | orchestrator | Sunday 05 April 2026 00:57:24 +0000 (0:00:00.626) 0:08:37.427 ********** 2026-04-05 01:00:32.416164 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:00:32.416169 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:00:32.416173 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:00:32.416178 | orchestrator | 2026-04-05 01:00:32.416182 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-04-05 01:00:32.416187 | orchestrator | Sunday 05 April 2026 00:57:25 +0000 (0:00:01.261) 0:08:38.688 ********** 2026-04-05 01:00:32.416191 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:00:32.416196 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:00:32.416200 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:00:32.416205 | orchestrator | 2026-04-05 01:00:32.416209 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-04-05 01:00:32.416214 | orchestrator | Sunday 05 April 2026 00:57:27 +0000 (0:00:01.532) 0:08:40.220 ********** 2026-04-05 01:00:32.416218 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:00:32.416223 | orchestrator | changed: [testbed-node-3] 
2026-04-05 01:00:32.416231 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:00:32.416236 | orchestrator | 2026-04-05 01:00:32.416241 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-04-05 01:00:32.416245 | orchestrator | Sunday 05 April 2026 00:57:29 +0000 (0:00:01.920) 0:08:42.141 ********** 2026-04-05 01:00:32.416250 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.416254 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.416259 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.416263 | orchestrator | 2026-04-05 01:00:32.416268 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-04-05 01:00:32.416272 | orchestrator | Sunday 05 April 2026 00:57:29 +0000 (0:00:00.432) 0:08:42.574 ********** 2026-04-05 01:00:32.416277 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.416281 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.416286 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.416290 | orchestrator | 2026-04-05 01:00:32.416295 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-04-05 01:00:32.416299 | orchestrator | Sunday 05 April 2026 00:57:30 +0000 (0:00:00.413) 0:08:42.987 ********** 2026-04-05 01:00:32.416304 | orchestrator | ok: [testbed-node-3] => (item=4) 2026-04-05 01:00:32.416308 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-04-05 01:00:32.416313 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-05 01:00:32.416317 | orchestrator | ok: [testbed-node-4] => (item=3) 2026-04-05 01:00:32.416321 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-04-05 01:00:32.416326 | orchestrator | ok: [testbed-node-5] => (item=5) 2026-04-05 01:00:32.416330 | orchestrator | 2026-04-05 01:00:32.416335 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-04-05 01:00:32.416339 | 
orchestrator | Sunday 05 April 2026 00:57:31 +0000 (0:00:01.485) 0:08:44.473 ********** 2026-04-05 01:00:32.416344 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-04-05 01:00:32.416361 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-04-05 01:00:32.416366 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-04-05 01:00:32.416370 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-04-05 01:00:32.416375 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-04-05 01:00:32.416379 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-04-05 01:00:32.416384 | orchestrator | 2026-04-05 01:00:32.416388 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-04-05 01:00:32.416393 | orchestrator | Sunday 05 April 2026 00:57:33 +0000 (0:00:02.218) 0:08:46.692 ********** 2026-04-05 01:00:32.416398 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-04-05 01:00:32.416402 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-04-05 01:00:32.416411 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-04-05 01:00:32.416415 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-04-05 01:00:32.416424 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-04-05 01:00:32.416428 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-04-05 01:00:32.416433 | orchestrator | 2026-04-05 01:00:32.416438 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-04-05 01:00:32.416442 | orchestrator | Sunday 05 April 2026 00:57:37 +0000 (0:00:03.638) 0:08:50.330 ********** 2026-04-05 01:00:32.416447 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.416451 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.416456 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-05 01:00:32.416460 | orchestrator | 2026-04-05 01:00:32.416465 | orchestrator | TASK [ceph-osd : Wait 
for all osd to be up] ************************************ 2026-04-05 01:00:32.416469 | orchestrator | Sunday 05 April 2026 00:57:40 +0000 (0:00:02.511) 0:08:52.841 ********** 2026-04-05 01:00:32.416474 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.416479 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.416530 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2026-04-05 01:00:32.416540 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-05 01:00:32.416547 | orchestrator | 2026-04-05 01:00:32.416555 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-04-05 01:00:32.416563 | orchestrator | Sunday 05 April 2026 00:57:53 +0000 (0:00:13.047) 0:09:05.888 ********** 2026-04-05 01:00:32.416570 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.416578 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.416585 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.416593 | orchestrator | 2026-04-05 01:00:32.416600 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-05 01:00:32.416608 | orchestrator | Sunday 05 April 2026 00:57:54 +0000 (0:00:00.934) 0:09:06.823 ********** 2026-04-05 01:00:32.416615 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.416623 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.416630 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.416637 | orchestrator | 2026-04-05 01:00:32.416644 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-04-05 01:00:32.416651 | orchestrator | Sunday 05 April 2026 00:57:54 +0000 (0:00:00.809) 0:09:07.633 ********** 2026-04-05 01:00:32.416658 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 
2026-04-05 01:00:32.416665 | orchestrator | 2026-04-05 01:00:32.416673 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-04-05 01:00:32.416681 | orchestrator | Sunday 05 April 2026 00:57:55 +0000 (0:00:00.569) 0:09:08.202 ********** 2026-04-05 01:00:32.416688 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 01:00:32.416696 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 01:00:32.416704 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 01:00:32.416713 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.416718 | orchestrator | 2026-04-05 01:00:32.416722 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-04-05 01:00:32.416727 | orchestrator | Sunday 05 April 2026 00:57:55 +0000 (0:00:00.466) 0:09:08.668 ********** 2026-04-05 01:00:32.416731 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.416740 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.416745 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.416749 | orchestrator | 2026-04-05 01:00:32.416754 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-04-05 01:00:32.416758 | orchestrator | Sunday 05 April 2026 00:57:56 +0000 (0:00:00.347) 0:09:09.016 ********** 2026-04-05 01:00:32.416763 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.416767 | orchestrator | 2026-04-05 01:00:32.416772 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-04-05 01:00:32.416786 | orchestrator | Sunday 05 April 2026 00:57:57 +0000 (0:00:00.851) 0:09:09.867 ********** 2026-04-05 01:00:32.416790 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.416795 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.416800 | orchestrator | skipping: 
[testbed-node-5]
2026-04-05 01:00:32.416804 | orchestrator |
2026-04-05 01:00:32.416809 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-04-05 01:00:32.416813 | orchestrator | Sunday 05 April 2026 00:57:57 +0000 (0:00:00.341) 0:09:10.208 **********
2026-04-05 01:00:32.416818 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.416822 | orchestrator |
2026-04-05 01:00:32.416827 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-04-05 01:00:32.416831 | orchestrator | Sunday 05 April 2026 00:57:57 +0000 (0:00:00.294) 0:09:10.503 **********
2026-04-05 01:00:32.416836 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.416840 | orchestrator |
2026-04-05 01:00:32.416845 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-04-05 01:00:32.416850 | orchestrator | Sunday 05 April 2026 00:57:58 +0000 (0:00:00.251) 0:09:10.755 **********
2026-04-05 01:00:32.416854 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.416859 | orchestrator |
2026-04-05 01:00:32.416863 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-04-05 01:00:32.416868 | orchestrator | Sunday 05 April 2026 00:57:58 +0000 (0:00:00.132) 0:09:10.887 **********
2026-04-05 01:00:32.416872 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.416877 | orchestrator |
2026-04-05 01:00:32.416881 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-04-05 01:00:32.416886 | orchestrator | Sunday 05 April 2026 00:57:58 +0000 (0:00:00.239) 0:09:11.127 **********
2026-04-05 01:00:32.416891 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.416895 | orchestrator |
2026-04-05 01:00:32.416900 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-04-05 01:00:32.416904 | orchestrator | Sunday 05 April 2026 00:57:58 +0000 (0:00:00.258) 0:09:11.386 **********
2026-04-05 01:00:32.416914 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-05 01:00:32.416919 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-05 01:00:32.416923 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-05 01:00:32.416928 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.416932 | orchestrator |
2026-04-05 01:00:32.416937 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-04-05 01:00:32.416942 | orchestrator | Sunday 05 April 2026 00:57:59 +0000 (0:00:00.438) 0:09:11.825 **********
2026-04-05 01:00:32.416946 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.416951 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.416955 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.416960 | orchestrator |
2026-04-05 01:00:32.416964 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-04-05 01:00:32.416987 | orchestrator | Sunday 05 April 2026 00:57:59 +0000 (0:00:00.633) 0:09:12.458 **********
2026-04-05 01:00:32.416995 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.417002 | orchestrator |
2026-04-05 01:00:32.417009 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-04-05 01:00:32.417016 | orchestrator | Sunday 05 April 2026 00:57:59 +0000 (0:00:00.236) 0:09:12.695 **********
2026-04-05 01:00:32.417023 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.417030 | orchestrator |
2026-04-05 01:00:32.417037 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-04-05 01:00:32.417043 | orchestrator |
2026-04-05 01:00:32.417051 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-05 01:00:32.417058 | orchestrator | Sunday 05 April 2026 00:58:00 +0000 (0:00:00.689) 0:09:13.385 **********
2026-04-05 01:00:32.417065 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:00:32.417080 | orchestrator |
2026-04-05 01:00:32.417086 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-05 01:00:32.417093 | orchestrator | Sunday 05 April 2026 00:58:01 +0000 (0:00:01.310) 0:09:14.695 **********
2026-04-05 01:00:32.417100 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:00:32.417107 | orchestrator |
2026-04-05 01:00:32.417115 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-05 01:00:32.417122 | orchestrator | Sunday 05 April 2026 00:58:03 +0000 (0:00:01.334) 0:09:16.030 **********
2026-04-05 01:00:32.417129 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.417135 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.417142 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.417150 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:32.417157 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:32.417164 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:32.417172 | orchestrator |
2026-04-05 01:00:32.417180 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-05 01:00:32.417187 | orchestrator | Sunday 05 April 2026 00:58:04 +0000 (0:00:01.362) 0:09:17.392 **********
2026-04-05 01:00:32.417194 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.417201 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.417208 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.417215 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.417228 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.417236 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.417243 | orchestrator |
2026-04-05 01:00:32.417251 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-05 01:00:32.417258 | orchestrator | Sunday 05 April 2026 00:58:05 +0000 (0:00:00.839) 0:09:18.232 **********
2026-04-05 01:00:32.417266 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.417273 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.417280 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.417288 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.417296 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.417304 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.417312 | orchestrator |
2026-04-05 01:00:32.417319 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-05 01:00:32.417327 | orchestrator | Sunday 05 April 2026 00:58:06 +0000 (0:00:01.031) 0:09:19.263 **********
2026-04-05 01:00:32.417336 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.417340 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.417345 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.417349 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.417354 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.417358 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.417363 | orchestrator |
2026-04-05 01:00:32.417367 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-05 01:00:32.417372 | orchestrator | Sunday 05 April 2026 00:58:07 +0000 (0:00:00.904) 0:09:20.168 **********
2026-04-05 01:00:32.417376 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.417381 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.417385 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.417390 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:32.417394 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:32.417399 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:32.417403 | orchestrator |
2026-04-05 01:00:32.417408 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-05 01:00:32.417412 | orchestrator | Sunday 05 April 2026 00:58:08 +0000 (0:00:01.388) 0:09:21.556 **********
2026-04-05 01:00:32.417417 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.417421 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.417431 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.417435 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.417440 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.417444 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.417449 | orchestrator |
2026-04-05 01:00:32.417453 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-05 01:00:32.417457 | orchestrator | Sunday 05 April 2026 00:58:09 +0000 (0:00:00.642) 0:09:22.199 **********
2026-04-05 01:00:32.417462 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.417473 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.417478 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.417482 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.417487 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.417491 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.417496 | orchestrator |
2026-04-05 01:00:32.417500 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-05 01:00:32.417505 | orchestrator | Sunday 05 April 2026 00:58:10 +0000 (0:00:01.001) 0:09:23.201 **********
2026-04-05 01:00:32.417509 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.417514 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.417519 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:32.417523 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.417528 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:32.417532 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:32.417536 | orchestrator |
2026-04-05 01:00:32.417541 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-05 01:00:32.417546 | orchestrator | Sunday 05 April 2026 00:58:11 +0000 (0:00:01.246) 0:09:24.447 **********
2026-04-05 01:00:32.417550 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.417555 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.417559 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.417564 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:32.417568 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:32.417573 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:32.417577 | orchestrator |
2026-04-05 01:00:32.417582 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-05 01:00:32.417586 | orchestrator | Sunday 05 April 2026 00:58:12 +0000 (0:00:01.126) 0:09:25.573 **********
2026-04-05 01:00:32.417591 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.417597 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.417604 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.417612 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.417620 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.417626 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.417633 | orchestrator |
2026-04-05 01:00:32.417640 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-05 01:00:32.417646 | orchestrator | Sunday 05 April 2026 00:58:13 +0000 (0:00:01.056) 0:09:26.630 **********
2026-04-05 01:00:32.417653 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.417660 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.417666 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.417674 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:32.417680 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:32.417686 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:32.417692 | orchestrator |
2026-04-05 01:00:32.417699 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-05 01:00:32.417705 | orchestrator | Sunday 05 April 2026 00:58:14 +0000 (0:00:00.680) 0:09:27.310 **********
2026-04-05 01:00:32.417712 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.417718 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.417725 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.417732 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.417739 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.417745 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.417752 | orchestrator |
2026-04-05 01:00:32.417769 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-05 01:00:32.417777 | orchestrator | Sunday 05 April 2026 00:58:15 +0000 (0:00:00.870) 0:09:28.181 **********
2026-04-05 01:00:32.417784 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.417791 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.417798 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.417805 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.417818 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.417825 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.417831 | orchestrator |
2026-04-05 01:00:32.417838 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-05 01:00:32.417845 | orchestrator | Sunday 05 April 2026 00:58:16 +0000 (0:00:00.707) 0:09:28.888 **********
2026-04-05 01:00:32.417852 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.417858 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.417865 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.417872 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.417878 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.417885 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.417892 | orchestrator |
2026-04-05 01:00:32.417898 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-05 01:00:32.417905 | orchestrator | Sunday 05 April 2026 00:58:17 +0000 (0:00:00.923) 0:09:29.812 **********
2026-04-05 01:00:32.417912 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.417919 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.417926 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.417933 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.417940 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.417946 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.417953 | orchestrator |
2026-04-05 01:00:32.417960 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-05 01:00:32.417966 | orchestrator | Sunday 05 April 2026 00:58:17 +0000 (0:00:00.615) 0:09:30.427 **********
2026-04-05 01:00:32.418143 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.418152 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.418158 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.418163 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:32.418168 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:32.418172 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:32.418177 | orchestrator |
2026-04-05 01:00:32.418181 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-05 01:00:32.418186 | orchestrator | Sunday 05 April 2026 00:58:18 +0000 (0:00:00.920) 0:09:31.348 **********
2026-04-05 01:00:32.418191 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.418195 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.418200 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.418204 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:32.418209 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:32.418213 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:32.418218 | orchestrator |
2026-04-05 01:00:32.418222 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-05 01:00:32.418238 | orchestrator | Sunday 05 April 2026 00:58:19 +0000 (0:00:00.653) 0:09:32.001 **********
2026-04-05 01:00:32.418243 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.418248 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.418252 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.418257 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:32.418262 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:32.418266 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:32.418271 | orchestrator |
2026-04-05 01:00:32.418276 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-05 01:00:32.418280 | orchestrator | Sunday 05 April 2026 00:58:20 +0000 (0:00:01.026) 0:09:33.028 **********
2026-04-05 01:00:32.418285 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.418297 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.418301 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.418306 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:32.418311 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:32.418315 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:32.418320 | orchestrator |
2026-04-05 01:00:32.418324 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-04-05 01:00:32.418329 | orchestrator | Sunday 05 April 2026 00:58:21 +0000 (0:00:01.353) 0:09:34.381 **********
2026-04-05 01:00:32.418334 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-05 01:00:32.418338 | orchestrator |
2026-04-05 01:00:32.418343 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-04-05 01:00:32.418348 | orchestrator | Sunday 05 April 2026 00:58:24 +0000 (0:00:03.333) 0:09:37.715 **********
2026-04-05 01:00:32.418352 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-05 01:00:32.418357 | orchestrator |
2026-04-05 01:00:32.418362 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-04-05 01:00:32.418366 | orchestrator | Sunday 05 April 2026 00:58:26 +0000 (0:00:01.705) 0:09:39.420 **********
2026-04-05 01:00:32.418371 | orchestrator | changed: [testbed-node-3]
2026-04-05 01:00:32.418375 | orchestrator | changed: [testbed-node-4]
2026-04-05 01:00:32.418380 | orchestrator | changed: [testbed-node-5]
2026-04-05 01:00:32.418384 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:32.418389 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:00:32.418394 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:00:32.418398 | orchestrator |
2026-04-05 01:00:32.418403 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-04-05 01:00:32.418407 | orchestrator | Sunday 05 April 2026 00:58:28 +0000 (0:00:01.629) 0:09:41.050 **********
2026-04-05 01:00:32.418412 | orchestrator | changed: [testbed-node-3]
2026-04-05 01:00:32.418416 | orchestrator | changed: [testbed-node-4]
2026-04-05 01:00:32.418421 | orchestrator | changed: [testbed-node-5]
2026-04-05 01:00:32.418425 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:00:32.418430 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:00:32.418434 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:00:32.418439 | orchestrator |
2026-04-05 01:00:32.418444 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-04-05 01:00:32.418448 | orchestrator | Sunday 05 April 2026 00:58:29 +0000 (0:00:01.403) 0:09:42.453 **********
2026-04-05 01:00:32.418454 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:00:32.418460 | orchestrator |
2026-04-05 01:00:32.418465 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-04-05 01:00:32.418469 | orchestrator | Sunday 05 April 2026 00:58:31 +0000 (0:00:01.350) 0:09:43.803 **********
2026-04-05 01:00:32.418474 | orchestrator | changed: [testbed-node-3]
2026-04-05 01:00:32.418483 | orchestrator | changed: [testbed-node-4]
2026-04-05 01:00:32.418487 | orchestrator | changed: [testbed-node-5]
2026-04-05 01:00:32.418492 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:00:32.418497 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:00:32.418501 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:00:32.418506 | orchestrator |
2026-04-05 01:00:32.418510 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-04-05 01:00:32.418515 | orchestrator | Sunday 05 April 2026 00:58:33 +0000 (0:00:02.024) 0:09:45.828 **********
2026-04-05 01:00:32.418520 | orchestrator | changed: [testbed-node-4]
2026-04-05 01:00:32.418524 | orchestrator | changed: [testbed-node-3]
2026-04-05 01:00:32.418529 | orchestrator | changed: [testbed-node-5]
2026-04-05 01:00:32.418533 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:00:32.418538 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:00:32.418542 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:00:32.418547 | orchestrator |
2026-04-05 01:00:32.418551 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-04-05 01:00:32.418562 | orchestrator | Sunday 05 April 2026 00:58:36 +0000 (0:00:03.882) 0:09:49.710 **********
2026-04-05 01:00:32.418567 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:00:32.418571 | orchestrator |
2026-04-05 01:00:32.418576 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-04-05 01:00:32.418580 | orchestrator | Sunday 05 April 2026 00:58:38 +0000 (0:00:01.300) 0:09:51.011 **********
2026-04-05 01:00:32.418585 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.418590 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.418594 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.418599 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:32.418603 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:32.418608 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:32.418612 | orchestrator |
2026-04-05 01:00:32.418617 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-04-05 01:00:32.418621 | orchestrator | Sunday 05 April 2026 00:58:38 +0000 (0:00:00.668) 0:09:51.680 **********
2026-04-05 01:00:32.418626 | orchestrator | changed: [testbed-node-3]
2026-04-05 01:00:32.418630 | orchestrator | changed: [testbed-node-4]
2026-04-05 01:00:32.418634 | orchestrator | changed: [testbed-node-5]
2026-04-05 01:00:32.418638 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:00:32.418642 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:00:32.418646 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:00:32.418650 | orchestrator |
2026-04-05 01:00:32.418655 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-04-05 01:00:32.418661 | orchestrator | Sunday 05 April 2026 00:58:41 +0000 (0:00:02.672) 0:09:54.353 **********
2026-04-05 01:00:32.418666 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.418670 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.418674 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.418678 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:32.418682 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:32.418686 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:32.418690 | orchestrator |
2026-04-05 01:00:32.418695 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-04-05 01:00:32.418699 | orchestrator |
2026-04-05 01:00:32.418703 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-05 01:00:32.418707 | orchestrator | Sunday 05 April 2026 00:58:42 +0000 (0:00:01.126) 0:09:55.479 **********
2026-04-05 01:00:32.418712 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 01:00:32.418716 | orchestrator |
2026-04-05 01:00:32.418720 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-05 01:00:32.418724 | orchestrator | Sunday 05 April 2026 00:58:43 +0000 (0:00:00.539) 0:09:56.018 **********
2026-04-05 01:00:32.418728 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 01:00:32.418732 | orchestrator |
2026-04-05 01:00:32.418736 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-05 01:00:32.418741 | orchestrator | Sunday 05 April 2026 00:58:44 +0000 (0:00:00.799) 0:09:56.817 **********
2026-04-05 01:00:32.418745 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.418749 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.418753 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.418757 | orchestrator |
2026-04-05 01:00:32.418761 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-05 01:00:32.418765 | orchestrator | Sunday 05 April 2026 00:58:44 +0000 (0:00:00.427) 0:09:57.244 **********
2026-04-05 01:00:32.418769 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.418773 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.418777 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.418785 | orchestrator |
2026-04-05 01:00:32.418789 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-05 01:00:32.418793 | orchestrator | Sunday 05 April 2026 00:58:45 +0000 (0:00:00.850) 0:09:58.095 **********
2026-04-05 01:00:32.418798 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.418802 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.418806 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.418810 | orchestrator |
2026-04-05 01:00:32.418814 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-05 01:00:32.418818 | orchestrator | Sunday 05 April 2026 00:58:46 +0000 (0:00:00.697) 0:09:58.793 **********
2026-04-05 01:00:32.418822 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.418826 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.418830 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.418835 | orchestrator |
2026-04-05 01:00:32.418839 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-05 01:00:32.418843 | orchestrator | Sunday 05 April 2026 00:58:47 +0000 (0:00:01.027) 0:09:59.821 **********
2026-04-05 01:00:32.418847 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.418851 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.418855 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.418859 | orchestrator |
2026-04-05 01:00:32.418863 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-05 01:00:32.418870 | orchestrator | Sunday 05 April 2026 00:58:47 +0000 (0:00:00.330) 0:10:00.152 **********
2026-04-05 01:00:32.418874 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.418878 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.418882 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.418887 | orchestrator |
2026-04-05 01:00:32.418891 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-05 01:00:32.418895 | orchestrator | Sunday 05 April 2026 00:58:47 +0000 (0:00:00.310) 0:10:00.462 **********
2026-04-05 01:00:32.418899 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.418903 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.418907 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.418911 | orchestrator |
2026-04-05 01:00:32.418915 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-05 01:00:32.418919 | orchestrator | Sunday 05 April 2026 00:58:48 +0000 (0:00:00.346) 0:10:00.808 **********
2026-04-05 01:00:32.418924 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.418928 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.418932 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.418936 | orchestrator |
2026-04-05 01:00:32.418940 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-05 01:00:32.418944 | orchestrator | Sunday 05 April 2026 00:58:48 +0000 (0:00:00.768) 0:10:01.577 **********
2026-04-05 01:00:32.418948 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.418952 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.418956 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.418960 | orchestrator |
2026-04-05 01:00:32.418965 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-05 01:00:32.418987 | orchestrator | Sunday 05 April 2026 00:58:49 +0000 (0:00:01.060) 0:10:02.637 **********
2026-04-05 01:00:32.418991 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.418996 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.419000 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.419004 | orchestrator |
2026-04-05 01:00:32.419008 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-05 01:00:32.419012 | orchestrator | Sunday 05 April 2026 00:58:50 +0000 (0:00:00.331) 0:10:02.968 **********
2026-04-05 01:00:32.419016 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.419021 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.419025 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.419029 | orchestrator |
2026-04-05 01:00:32.419033 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-05 01:00:32.419041 | orchestrator | Sunday 05 April 2026 00:58:50 +0000 (0:00:00.326) 0:10:03.294 **********
2026-04-05 01:00:32.419045 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.419049 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.419056 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.419060 | orchestrator |
2026-04-05 01:00:32.419065 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-05 01:00:32.419069 | orchestrator | Sunday 05 April 2026 00:58:50 +0000 (0:00:00.316) 0:10:03.611 **********
2026-04-05 01:00:32.419073 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.419077 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.419081 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.419085 | orchestrator |
2026-04-05 01:00:32.419089 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-05 01:00:32.419094 | orchestrator | Sunday 05 April 2026 00:58:51 +0000 (0:00:00.654) 0:10:04.266 **********
2026-04-05 01:00:32.419098 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.419102 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.419106 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.419110 | orchestrator |
2026-04-05 01:00:32.419114 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-05 01:00:32.419118 | orchestrator | Sunday 05 April 2026 00:58:51 +0000 (0:00:00.403) 0:10:04.669 **********
2026-04-05 01:00:32.419123 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.419127 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.419131 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.419135 | orchestrator |
2026-04-05 01:00:32.419139 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-05 01:00:32.419143 | orchestrator | Sunday 05 April 2026 00:58:52 +0000 (0:00:00.331) 0:10:05.001 **********
2026-04-05 01:00:32.419147 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.419152 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.419156 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.419160 | orchestrator |
2026-04-05 01:00:32.419164 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-05 01:00:32.419168 | orchestrator | Sunday 05 April 2026 00:58:52 +0000 (0:00:00.319) 0:10:05.320 **********
2026-04-05 01:00:32.419172 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.419176 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.419180 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.419185 | orchestrator |
2026-04-05 01:00:32.419189 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-05 01:00:32.419193 | orchestrator | Sunday 05 April 2026 00:58:53 +0000 (0:00:00.656) 0:10:05.977 **********
2026-04-05 01:00:32.419197 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.419201 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.419205 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.419209 | orchestrator |
2026-04-05 01:00:32.419214 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-05 01:00:32.419218 | orchestrator | Sunday 05 April 2026 00:58:53 +0000 (0:00:00.369) 0:10:06.347 **********
2026-04-05 01:00:32.419225 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:32.419230 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:32.419234 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:32.419238 | orchestrator |
2026-04-05 01:00:32.419242 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-04-05 01:00:32.419246 | orchestrator | Sunday 05 April 2026 00:58:54 +0000 (0:00:00.545) 0:10:06.892 **********
2026-04-05 01:00:32.419250 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.419255 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:32.419259 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-04-05 01:00:32.419263 | orchestrator |
2026-04-05 01:00:32.419267 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-04-05 01:00:32.419274 | orchestrator | Sunday 05 April 2026 00:58:54 +0000 (0:00:00.732) 0:10:07.625 **********
2026-04-05 01:00:32.419282 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-05 01:00:32.419286 | orchestrator |
2026-04-05 01:00:32.419290 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-04-05 01:00:32.419294 | orchestrator | Sunday 05 April 2026 00:58:56 +0000 (0:00:01.768) 0:10:09.393 **********
2026-04-05 01:00:32.419300 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-04-05 01:00:32.419306 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.419310 | orchestrator |
2026-04-05 01:00:32.419315 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-04-05 01:00:32.419319 | orchestrator | Sunday 05 April 2026 00:58:56 +0000 (0:00:00.213) 0:10:09.607 **********
2026-04-05 01:00:32.419325 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-05 01:00:32.419332 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-05 01:00:32.419336 | orchestrator |
2026-04-05 01:00:32.419340 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-04-05 01:00:32.419345 | orchestrator | Sunday 05 April 2026 00:59:03 +0000 (0:00:06.606) 0:10:16.213 **********
2026-04-05 01:00:32.419349 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-05 01:00:32.419353 | orchestrator |
2026-04-05 01:00:32.419357 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-04-05 01:00:32.419361 | orchestrator | Sunday 05 April 2026 00:59:06 +0000 (0:00:02.763) 0:10:18.977 **********
2026-04-05 01:00:32.419368 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 01:00:32.419373 | orchestrator |
2026-04-05 01:00:32.419377 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-04-05 01:00:32.419381 | orchestrator | Sunday 05 April 2026 00:59:07 +0000 (0:00:00.841) 0:10:19.818 **********
2026-04-05 01:00:32.419385 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-05 01:00:32.419389 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-05 01:00:32.419393 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-05 01:00:32.419397 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-04-05 01:00:32.419401 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-04-05 01:00:32.419405 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-04-05 01:00:32.419410 | orchestrator |
2026-04-05 01:00:32.419414 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-04-05 01:00:32.419418 | orchestrator | Sunday 05 April 2026 00:59:08 +0000 (0:00:01.109) 0:10:20.928 **********
2026-04-05 01:00:32.419422 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-05 01:00:32.419426 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-05 01:00:32.419431 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-05 01:00:32.419435 | orchestrator |
2026-04-05 01:00:32.419439 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-04-05 01:00:32.419443 | orchestrator | Sunday 05 April 2026 00:59:10 +0000 (0:00:01.903) 0:10:22.831 **********
2026-04-05 01:00:32.419447 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-05 01:00:32.419456 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-05 01:00:32.419460 | orchestrator | changed: [testbed-node-3]
2026-04-05 01:00:32.419464 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-05 01:00:32.419468 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-04-05 01:00:32.419472 | orchestrator | changed: [testbed-node-4]
2026-04-05 01:00:32.419476 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-05 01:00:32.419480 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-05 01:00:32.419484 | orchestrator | changed: [testbed-node-5]
2026-04-05 01:00:32.419488 | orchestrator |
2026-04-05 01:00:32.419493 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-04-05 01:00:32.419497 | orchestrator | Sunday 05 April 2026 00:59:11 +0000 (0:00:01.210) 0:10:24.041 **********
2026-04-05 01:00:32.419501 | orchestrator | changed: [testbed-node-3]
2026-04-05 01:00:32.419505 | orchestrator | changed: [testbed-node-4]
2026-04-05 01:00:32.419509 | orchestrator | changed: [testbed-node-5]
2026-04-05 01:00:32.419513 | orchestrator |
2026-04-05 01:00:32.419517 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-04-05 01:00:32.419521 | orchestrator | Sunday 05 April 2026 00:59:13 +0000 (0:00:02.437) 0:10:26.478 **********
2026-04-05 01:00:32.419525 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:32.419529 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:32.419533 |
orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.419537 | orchestrator | 2026-04-05 01:00:32.419541 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-04-05 01:00:32.419545 | orchestrator | Sunday 05 April 2026 00:59:14 +0000 (0:00:00.331) 0:10:26.810 ********** 2026-04-05 01:00:32.419556 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:00:32.419560 | orchestrator | 2026-04-05 01:00:32.419564 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-04-05 01:00:32.419568 | orchestrator | Sunday 05 April 2026 00:59:14 +0000 (0:00:00.556) 0:10:27.366 ********** 2026-04-05 01:00:32.419572 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:00:32.419576 | orchestrator | 2026-04-05 01:00:32.419580 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-04-05 01:00:32.419584 | orchestrator | Sunday 05 April 2026 00:59:15 +0000 (0:00:00.883) 0:10:28.249 ********** 2026-04-05 01:00:32.419588 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:00:32.419592 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:00:32.419596 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:00:32.419601 | orchestrator | 2026-04-05 01:00:32.419605 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-04-05 01:00:32.419609 | orchestrator | Sunday 05 April 2026 00:59:16 +0000 (0:00:01.258) 0:10:29.508 ********** 2026-04-05 01:00:32.419613 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:00:32.419617 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:00:32.419621 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:00:32.419625 | orchestrator | 2026-04-05 01:00:32.419629 | orchestrator | TASK 
[ceph-mds : Enable ceph-mds.target] *************************************** 2026-04-05 01:00:32.419633 | orchestrator | Sunday 05 April 2026 00:59:18 +0000 (0:00:01.285) 0:10:30.793 ********** 2026-04-05 01:00:32.419637 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:00:32.419641 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:00:32.419645 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:00:32.419649 | orchestrator | 2026-04-05 01:00:32.419653 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-04-05 01:00:32.419657 | orchestrator | Sunday 05 April 2026 00:59:20 +0000 (0:00:02.212) 0:10:33.006 ********** 2026-04-05 01:00:32.419661 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:00:32.419665 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:00:32.419670 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:00:32.419677 | orchestrator | 2026-04-05 01:00:32.419681 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-04-05 01:00:32.419685 | orchestrator | Sunday 05 April 2026 00:59:22 +0000 (0:00:01.984) 0:10:34.990 ********** 2026-04-05 01:00:32.419689 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:32.419694 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.419698 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:32.419702 | orchestrator | 2026-04-05 01:00:32.419708 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-05 01:00:32.419713 | orchestrator | Sunday 05 April 2026 00:59:23 +0000 (0:00:01.560) 0:10:36.551 ********** 2026-04-05 01:00:32.419717 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:00:32.419721 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:00:32.419725 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:00:32.419729 | orchestrator | 2026-04-05 01:00:32.419733 | orchestrator | RUNNING HANDLER [ceph-handler : 
Mdss handler] ********************************** 2026-04-05 01:00:32.419737 | orchestrator | Sunday 05 April 2026 00:59:24 +0000 (0:00:00.719) 0:10:37.271 ********** 2026-04-05 01:00:32.419741 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:00:32.419745 | orchestrator | 2026-04-05 01:00:32.419750 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-04-05 01:00:32.419754 | orchestrator | Sunday 05 April 2026 00:59:25 +0000 (0:00:00.562) 0:10:37.834 ********** 2026-04-05 01:00:32.419758 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:32.419762 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.419766 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:32.419770 | orchestrator | 2026-04-05 01:00:32.419774 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-04-05 01:00:32.419778 | orchestrator | Sunday 05 April 2026 00:59:25 +0000 (0:00:00.583) 0:10:38.417 ********** 2026-04-05 01:00:32.419782 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:00:32.419786 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:00:32.419790 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:00:32.419795 | orchestrator | 2026-04-05 01:00:32.419799 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-04-05 01:00:32.419803 | orchestrator | Sunday 05 April 2026 00:59:26 +0000 (0:00:01.251) 0:10:39.668 ********** 2026-04-05 01:00:32.419807 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 01:00:32.419811 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 01:00:32.419815 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 01:00:32.419819 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.419823 | orchestrator | 
2026-04-05 01:00:32.419827 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-04-05 01:00:32.419831 | orchestrator | Sunday 05 April 2026 00:59:27 +0000 (0:00:00.638) 0:10:40.307 ********** 2026-04-05 01:00:32.419835 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:32.419839 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.419843 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:32.419847 | orchestrator | 2026-04-05 01:00:32.419852 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-04-05 01:00:32.419856 | orchestrator | 2026-04-05 01:00:32.419860 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-05 01:00:32.419864 | orchestrator | Sunday 05 April 2026 00:59:28 +0000 (0:00:00.599) 0:10:40.906 ********** 2026-04-05 01:00:32.419868 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:00:32.419872 | orchestrator | 2026-04-05 01:00:32.419876 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-05 01:00:32.419880 | orchestrator | Sunday 05 April 2026 00:59:28 +0000 (0:00:00.811) 0:10:41.717 ********** 2026-04-05 01:00:32.419887 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:00:32.419894 | orchestrator | 2026-04-05 01:00:32.419899 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-05 01:00:32.419903 | orchestrator | Sunday 05 April 2026 00:59:29 +0000 (0:00:00.547) 0:10:42.264 ********** 2026-04-05 01:00:32.419907 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.419911 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.419915 | orchestrator | skipping: 
[testbed-node-5] 2026-04-05 01:00:32.419919 | orchestrator | 2026-04-05 01:00:32.419923 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-05 01:00:32.419927 | orchestrator | Sunday 05 April 2026 00:59:30 +0000 (0:00:00.544) 0:10:42.809 ********** 2026-04-05 01:00:32.419931 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:32.419935 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.419940 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:32.419944 | orchestrator | 2026-04-05 01:00:32.419948 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-05 01:00:32.419952 | orchestrator | Sunday 05 April 2026 00:59:30 +0000 (0:00:00.789) 0:10:43.598 ********** 2026-04-05 01:00:32.419956 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:32.419960 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.419964 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:32.419978 | orchestrator | 2026-04-05 01:00:32.419983 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-05 01:00:32.419987 | orchestrator | Sunday 05 April 2026 00:59:31 +0000 (0:00:00.768) 0:10:44.367 ********** 2026-04-05 01:00:32.419991 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:32.419995 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.419999 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:32.420003 | orchestrator | 2026-04-05 01:00:32.420007 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-05 01:00:32.420011 | orchestrator | Sunday 05 April 2026 00:59:32 +0000 (0:00:00.743) 0:10:45.111 ********** 2026-04-05 01:00:32.420015 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.420020 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.420024 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.420028 | 
orchestrator | 2026-04-05 01:00:32.420032 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-05 01:00:32.420036 | orchestrator | Sunday 05 April 2026 00:59:33 +0000 (0:00:00.650) 0:10:45.761 ********** 2026-04-05 01:00:32.420040 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.420044 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.420048 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.420052 | orchestrator | 2026-04-05 01:00:32.420057 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-05 01:00:32.420064 | orchestrator | Sunday 05 April 2026 00:59:33 +0000 (0:00:00.368) 0:10:46.130 ********** 2026-04-05 01:00:32.420068 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.420072 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.420076 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.420080 | orchestrator | 2026-04-05 01:00:32.420085 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-05 01:00:32.420089 | orchestrator | Sunday 05 April 2026 00:59:33 +0000 (0:00:00.292) 0:10:46.422 ********** 2026-04-05 01:00:32.420093 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:32.420097 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:32.420101 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.420105 | orchestrator | 2026-04-05 01:00:32.420109 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-05 01:00:32.420113 | orchestrator | Sunday 05 April 2026 00:59:34 +0000 (0:00:00.718) 0:10:47.141 ********** 2026-04-05 01:00:32.420117 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:32.420134 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.420139 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:32.420143 | orchestrator | 2026-04-05 
01:00:32.420150 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-05 01:00:32.420155 | orchestrator | Sunday 05 April 2026 00:59:35 +0000 (0:00:01.006) 0:10:48.148 ********** 2026-04-05 01:00:32.420159 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.420163 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.420167 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.420171 | orchestrator | 2026-04-05 01:00:32.420175 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-05 01:00:32.420179 | orchestrator | Sunday 05 April 2026 00:59:35 +0000 (0:00:00.344) 0:10:48.492 ********** 2026-04-05 01:00:32.420184 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.420188 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.420192 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.420196 | orchestrator | 2026-04-05 01:00:32.420200 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-05 01:00:32.420204 | orchestrator | Sunday 05 April 2026 00:59:36 +0000 (0:00:00.330) 0:10:48.823 ********** 2026-04-05 01:00:32.420208 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:32.420212 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.420216 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:32.420220 | orchestrator | 2026-04-05 01:00:32.420224 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-05 01:00:32.420228 | orchestrator | Sunday 05 April 2026 00:59:36 +0000 (0:00:00.325) 0:10:49.149 ********** 2026-04-05 01:00:32.420232 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:32.420236 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.420240 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:32.420244 | orchestrator | 2026-04-05 01:00:32.420249 | orchestrator | TASK 
[ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-05 01:00:32.420253 | orchestrator | Sunday 05 April 2026 00:59:37 +0000 (0:00:00.622) 0:10:49.772 ********** 2026-04-05 01:00:32.420257 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:32.420261 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.420265 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:32.420269 | orchestrator | 2026-04-05 01:00:32.420273 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-05 01:00:32.420277 | orchestrator | Sunday 05 April 2026 00:59:37 +0000 (0:00:00.354) 0:10:50.126 ********** 2026-04-05 01:00:32.420281 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.420285 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.420290 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.420294 | orchestrator | 2026-04-05 01:00:32.420300 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-05 01:00:32.420305 | orchestrator | Sunday 05 April 2026 00:59:37 +0000 (0:00:00.350) 0:10:50.476 ********** 2026-04-05 01:00:32.420309 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.420313 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.420317 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.420321 | orchestrator | 2026-04-05 01:00:32.420325 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-05 01:00:32.420329 | orchestrator | Sunday 05 April 2026 00:59:38 +0000 (0:00:00.305) 0:10:50.781 ********** 2026-04-05 01:00:32.420333 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.420338 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.420342 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.420346 | orchestrator | 2026-04-05 01:00:32.420350 | orchestrator | TASK [ceph-handler : 
Set_fact handler_crash_status] **************************** 2026-04-05 01:00:32.420354 | orchestrator | Sunday 05 April 2026 00:59:38 +0000 (0:00:00.568) 0:10:51.350 ********** 2026-04-05 01:00:32.420358 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:32.420362 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.420366 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:32.420370 | orchestrator | 2026-04-05 01:00:32.420374 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-05 01:00:32.420382 | orchestrator | Sunday 05 April 2026 00:59:39 +0000 (0:00:00.391) 0:10:51.741 ********** 2026-04-05 01:00:32.420386 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:32.420390 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.420394 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:32.420398 | orchestrator | 2026-04-05 01:00:32.420402 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-04-05 01:00:32.420406 | orchestrator | Sunday 05 April 2026 00:59:39 +0000 (0:00:00.558) 0:10:52.300 ********** 2026-04-05 01:00:32.420411 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:00:32.420415 | orchestrator | 2026-04-05 01:00:32.420419 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-05 01:00:32.420423 | orchestrator | Sunday 05 April 2026 00:59:40 +0000 (0:00:00.844) 0:10:53.145 ********** 2026-04-05 01:00:32.420427 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:00:32.420431 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-05 01:00:32.420435 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-05 01:00:32.420439 | orchestrator | 2026-04-05 01:00:32.420446 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if 
needed] *********************************** 2026-04-05 01:00:32.420451 | orchestrator | Sunday 05 April 2026 00:59:42 +0000 (0:00:01.831) 0:10:54.976 ********** 2026-04-05 01:00:32.420455 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-05 01:00:32.420459 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-05 01:00:32.420463 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:00:32.420467 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-05 01:00:32.420471 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-05 01:00:32.420475 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:00:32.420479 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-05 01:00:32.420483 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-05 01:00:32.420487 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:00:32.420491 | orchestrator | 2026-04-05 01:00:32.420496 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-04-05 01:00:32.420500 | orchestrator | Sunday 05 April 2026 00:59:43 +0000 (0:00:01.261) 0:10:56.238 ********** 2026-04-05 01:00:32.420504 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.420508 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.420512 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.420516 | orchestrator | 2026-04-05 01:00:32.420520 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-04-05 01:00:32.420524 | orchestrator | Sunday 05 April 2026 00:59:43 +0000 (0:00:00.317) 0:10:56.555 ********** 2026-04-05 01:00:32.420528 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:00:32.420532 | orchestrator | 2026-04-05 01:00:32.420536 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 
2026-04-05 01:00:32.420540 | orchestrator | Sunday 05 April 2026 00:59:44 +0000 (0:00:00.842) 0:10:57.398 ********** 2026-04-05 01:00:32.420545 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-05 01:00:32.420549 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-05 01:00:32.420553 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-05 01:00:32.420557 | orchestrator | 2026-04-05 01:00:32.420561 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-04-05 01:00:32.420565 | orchestrator | Sunday 05 April 2026 00:59:45 +0000 (0:00:00.770) 0:10:58.169 ********** 2026-04-05 01:00:32.420573 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:00:32.420577 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-05 01:00:32.420581 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:00:32.420588 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-05 01:00:32.420593 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:00:32.420597 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-05 01:00:32.420601 | orchestrator | 2026-04-05 01:00:32.420605 | orchestrator | TASK [ceph-rgw : Get keys 
from monitors] *************************************** 2026-04-05 01:00:32.420609 | orchestrator | Sunday 05 April 2026 00:59:49 +0000 (0:00:03.724) 0:11:01.893 ********** 2026-04-05 01:00:32.420613 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:00:32.420617 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-05 01:00:32.420621 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:00:32.420625 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-05 01:00:32.420629 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:00:32.420633 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-05 01:00:32.420637 | orchestrator | 2026-04-05 01:00:32.420641 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-05 01:00:32.420645 | orchestrator | Sunday 05 April 2026 00:59:51 +0000 (0:00:02.326) 0:11:04.219 ********** 2026-04-05 01:00:32.420650 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-05 01:00:32.420654 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:00:32.420658 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-05 01:00:32.420662 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:00:32.420666 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-05 01:00:32.420670 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:00:32.420674 | orchestrator | 2026-04-05 01:00:32.420678 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-04-05 01:00:32.420682 | orchestrator | Sunday 05 April 2026 00:59:52 +0000 (0:00:01.269) 0:11:05.489 ********** 2026-04-05 01:00:32.420686 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-04-05 
01:00:32.420691 | orchestrator | 2026-04-05 01:00:32.420695 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-04-05 01:00:32.420699 | orchestrator | Sunday 05 April 2026 00:59:52 +0000 (0:00:00.224) 0:11:05.714 ********** 2026-04-05 01:00:32.420705 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 01:00:32.420710 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 01:00:32.420714 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 01:00:32.420719 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 01:00:32.420723 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 01:00:32.420727 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.420731 | orchestrator | 2026-04-05 01:00:32.420735 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-04-05 01:00:32.420743 | orchestrator | Sunday 05 April 2026 00:59:53 +0000 (0:00:00.642) 0:11:06.356 ********** 2026-04-05 01:00:32.420747 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 01:00:32.420751 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 01:00:32.420755 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-04-05 01:00:32.420760 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 01:00:32.420764 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 01:00:32.420768 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.420772 | orchestrator | 2026-04-05 01:00:32.420776 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-04-05 01:00:32.420780 | orchestrator | Sunday 05 April 2026 00:59:54 +0000 (0:00:00.927) 0:11:07.283 ********** 2026-04-05 01:00:32.420784 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-05 01:00:32.420788 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-05 01:00:32.420793 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-05 01:00:32.420800 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-05 01:00:32.420804 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-05 01:00:32.420808 | orchestrator | 2026-04-05 01:00:32.420812 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-04-05 01:00:32.420816 | orchestrator | Sunday 05 April 2026 01:00:16 +0000 (0:00:21.611) 0:11:28.895 
********** 2026-04-05 01:00:32.420820 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.420824 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.420829 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.420833 | orchestrator | 2026-04-05 01:00:32.420837 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-04-05 01:00:32.420841 | orchestrator | Sunday 05 April 2026 01:00:16 +0000 (0:00:00.638) 0:11:29.533 ********** 2026-04-05 01:00:32.420845 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.420849 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.420853 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.420857 | orchestrator | 2026-04-05 01:00:32.420861 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-04-05 01:00:32.420865 | orchestrator | Sunday 05 April 2026 01:00:17 +0000 (0:00:00.403) 0:11:29.937 ********** 2026-04-05 01:00:32.420869 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:00:32.420873 | orchestrator | 2026-04-05 01:00:32.420877 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-04-05 01:00:32.420882 | orchestrator | Sunday 05 April 2026 01:00:17 +0000 (0:00:00.608) 0:11:30.545 ********** 2026-04-05 01:00:32.420886 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-5, testbed-node-4 2026-04-05 01:00:32.420890 | orchestrator | 2026-04-05 01:00:32.420894 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-04-05 01:00:32.420901 | orchestrator | Sunday 05 April 2026 01:00:18 +0000 (0:00:00.930) 0:11:31.476 ********** 2026-04-05 01:00:32.420905 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:00:32.420909 | orchestrator | 
changed: [testbed-node-4] 2026-04-05 01:00:32.420913 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:00:32.420917 | orchestrator | 2026-04-05 01:00:32.420921 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-04-05 01:00:32.420928 | orchestrator | Sunday 05 April 2026 01:00:20 +0000 (0:00:01.622) 0:11:33.099 ********** 2026-04-05 01:00:32.420932 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:00:32.420936 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:00:32.420940 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:00:32.420945 | orchestrator | 2026-04-05 01:00:32.420949 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-04-05 01:00:32.420953 | orchestrator | Sunday 05 April 2026 01:00:21 +0000 (0:00:01.331) 0:11:34.431 ********** 2026-04-05 01:00:32.420957 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:00:32.420961 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:00:32.420965 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:00:32.420997 | orchestrator | 2026-04-05 01:00:32.421002 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-04-05 01:00:32.421006 | orchestrator | Sunday 05 April 2026 01:00:24 +0000 (0:00:02.416) 0:11:36.847 ********** 2026-04-05 01:00:32.421010 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-05 01:00:32.421015 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-05 01:00:32.421019 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-05 01:00:32.421023 | orchestrator | 2026-04-05 01:00:32.421027 | orchestrator | RUNNING HANDLER 
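The sequence logged above (generate unit file, generate `ceph-radosgw.target`, enable the target, start one `rgw0` instance per node) reduces to a few `systemctl` calls. A hedged sketch, assuming the `ceph-radosgw@<instance>` unit-name convention used by ceph-ansible; the function echoes rather than executes so it is safe to run anywhere:

```shell
# Sketch of the per-node systemd steps; the instance name "rgw0" matches the
# loop items in the log, the unit template name is an assumption.
enable_rgw() {
  node="$1"
  echo systemctl daemon-reload
  echo systemctl enable ceph-radosgw.target
  echo systemctl start "ceph-radosgw@rgw.$node.rgw0.service"
}
enable_rgw testbed-node-3
```

Enabling the target (rather than individual services) is what lets all radosgw instances on a host be stopped or started as one group.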
[ceph-handler : Make tempdir for scripts] ********************** 2026-04-05 01:00:32.421031 | orchestrator | Sunday 05 April 2026 01:00:27 +0000 (0:00:02.957) 0:11:39.804 ********** 2026-04-05 01:00:32.421035 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.421039 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.421043 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.421047 | orchestrator | 2026-04-05 01:00:32.421052 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-04-05 01:00:32.421056 | orchestrator | Sunday 05 April 2026 01:00:27 +0000 (0:00:00.758) 0:11:40.563 ********** 2026-04-05 01:00:32.421060 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:00:32.421064 | orchestrator | 2026-04-05 01:00:32.421068 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-04-05 01:00:32.421072 | orchestrator | Sunday 05 April 2026 01:00:28 +0000 (0:00:00.743) 0:11:41.307 ********** 2026-04-05 01:00:32.421076 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:32.421080 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.421084 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:32.421088 | orchestrator | 2026-04-05 01:00:32.421092 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-04-05 01:00:32.421096 | orchestrator | Sunday 05 April 2026 01:00:29 +0000 (0:00:00.563) 0:11:41.870 ********** 2026-04-05 01:00:32.421101 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.421105 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:32.421109 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:32.421113 | orchestrator | 2026-04-05 01:00:32.421117 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-04-05 
01:00:32.421121 | orchestrator | Sunday 05 April 2026 01:00:29 +0000 (0:00:00.447) 0:11:42.318 ********** 2026-04-05 01:00:32.421125 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 01:00:32.421132 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 01:00:32.421141 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 01:00:32.421145 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:32.421149 | orchestrator | 2026-04-05 01:00:32.421153 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-04-05 01:00:32.421158 | orchestrator | Sunday 05 April 2026 01:00:30 +0000 (0:00:01.307) 0:11:43.625 ********** 2026-04-05 01:00:32.421162 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:32.421166 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:32.421170 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:32.421174 | orchestrator | 2026-04-05 01:00:32.421178 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:00:32.421182 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-04-05 01:00:32.421187 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-04-05 01:00:32.421191 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-04-05 01:00:32.421195 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-04-05 01:00:32.421199 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-04-05 01:00:32.421204 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-04-05 01:00:32.421208 | orchestrator | 2026-04-05 
01:00:32.421212 | orchestrator | 2026-04-05 01:00:32.421216 | orchestrator | 2026-04-05 01:00:32.421220 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:00:32.421226 | orchestrator | Sunday 05 April 2026 01:00:31 +0000 (0:00:00.287) 0:11:43.913 ********** 2026-04-05 01:00:32.421232 | orchestrator | =============================================================================== 2026-04-05 01:00:32.421243 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 68.29s 2026-04-05 01:00:32.421250 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 40.46s 2026-04-05 01:00:32.421257 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 21.61s 2026-04-05 01:00:32.421263 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.49s 2026-04-05 01:00:32.421269 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 13.65s 2026-04-05 01:00:32.421276 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 13.05s 2026-04-05 01:00:32.421282 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 10.82s 2026-04-05 01:00:32.421289 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 9.27s 2026-04-05 01:00:32.421295 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.56s 2026-04-05 01:00:32.421302 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 6.85s 2026-04-05 01:00:32.421308 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 6.61s 2026-04-05 01:00:32.421315 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 6.37s 2026-04-05 01:00:32.421321 | orchestrator | ceph-mgr : Disable ceph 
mgr enabled modules ----------------------------- 6.14s 2026-04-05 01:00:32.421327 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.55s 2026-04-05 01:00:32.421333 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 4.37s 2026-04-05 01:00:32.421340 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.88s 2026-04-05 01:00:32.421346 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 3.72s 2026-04-05 01:00:32.421357 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.64s 2026-04-05 01:00:32.421363 | orchestrator | ceph-config : Generate Ceph file ---------------------------------------- 3.56s 2026-04-05 01:00:32.421368 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 3.35s 2026-04-05 01:00:32.421374 | orchestrator | 2026-04-05 01:00:32 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:35.426705 | orchestrator | 2026-04-05 01:00:35 | INFO  | Task 75d9bffd-d631-4038-bb63-abf7ee08b119 is in state STARTED 2026-04-05 01:00:35.427325 | orchestrator | 2026-04-05 01:00:35 | INFO  | Task 598604da-24e2-466c-8758-f8d0ea8332b0 is in state STARTED 2026-04-05 01:00:35.428088 | orchestrator | 2026-04-05 01:00:35 | INFO  | Task 325b6600-e709-47a4-b335-835f2bb43dd5 is in state STARTED 2026-04-05 01:00:35.428119 | orchestrator | 2026-04-05 01:00:35 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:38.471562 | orchestrator | 2026-04-05 01:00:38 | INFO  | Task 75d9bffd-d631-4038-bb63-abf7ee08b119 is in state STARTED 2026-04-05 01:00:38.472500 | orchestrator | 2026-04-05 01:00:38 | INFO  | Task 598604da-24e2-466c-8758-f8d0ea8332b0 is in state STARTED 2026-04-05 01:00:38.474066 | orchestrator | 2026-04-05 01:00:38 | INFO  | Task 325b6600-e709-47a4-b335-835f2bb43dd5 is in state STARTED 2026-04-05 01:00:38.474120 | 
[... repetitive polling output trimmed: tasks 75d9bffd-d631-4038-bb63-abf7ee08b119, 598604da-24e2-466c-8758-f8d0ea8332b0 and 325b6600-e709-47a4-b335-835f2bb43dd5 remained in state STARTED, rechecked every ~3 seconds from 01:00:38 to 01:02:25 ...]
| 2026-04-05 01:02:25 | INFO  | Task 598604da-24e2-466c-8758-f8d0ea8332b0 is in state STARTED 2026-04-05 01:02:25.408452 | orchestrator | 2026-04-05 01:02:25 | INFO  | Task 325b6600-e709-47a4-b335-835f2bb43dd5 is in state STARTED 2026-04-05 01:02:25.408474 | orchestrator | 2026-04-05 01:02:25 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:02:28.457807 | orchestrator | 2026-04-05 01:02:28 | INFO  | Task 75d9bffd-d631-4038-bb63-abf7ee08b119 is in state STARTED 2026-04-05 01:02:28.460534 | orchestrator | 2026-04-05 01:02:28 | INFO  | Task 598604da-24e2-466c-8758-f8d0ea8332b0 is in state STARTED 2026-04-05 01:02:28.462777 | orchestrator | 2026-04-05 01:02:28 | INFO  | Task 325b6600-e709-47a4-b335-835f2bb43dd5 is in state STARTED 2026-04-05 01:02:28.462828 | orchestrator | 2026-04-05 01:02:28 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:02:31.515488 | orchestrator | 2026-04-05 01:02:31 | INFO  | Task 75d9bffd-d631-4038-bb63-abf7ee08b119 is in state STARTED 2026-04-05 01:02:31.517424 | orchestrator | 2026-04-05 01:02:31 | INFO  | Task 598604da-24e2-466c-8758-f8d0ea8332b0 is in state STARTED 2026-04-05 01:02:31.519843 | orchestrator | 2026-04-05 01:02:31 | INFO  | Task 325b6600-e709-47a4-b335-835f2bb43dd5 is in state STARTED 2026-04-05 01:02:31.520050 | orchestrator | 2026-04-05 01:02:31 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:02:34.572901 | orchestrator | 2026-04-05 01:02:34 | INFO  | Task 8c012d9f-75c7-447a-9e8a-47e3c6b303aa is in state STARTED 2026-04-05 01:02:34.574460 | orchestrator | 2026-04-05 01:02:34 | INFO  | Task 75d9bffd-d631-4038-bb63-abf7ee08b119 is in state STARTED 2026-04-05 01:02:34.578244 | orchestrator | 2026-04-05 01:02:34 | INFO  | Task 598604da-24e2-466c-8758-f8d0ea8332b0 is in state SUCCESS 2026-04-05 01:02:34.579918 | orchestrator | 2026-04-05 01:02:34.579982 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-05 01:02:34.579998 | orchestrator | 
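The "is in state STARTED / Wait … until the next check" lines come from a simple poll-until-done loop in the manager. A minimal sketch of that pattern; `get_task_state` here is a hypothetical stand-in for the real OSISM task-state query (it fakes two STARTED responses, then SUCCESS), and the real loop sleeps between checks:

```shell
# Fake state client (assumption): reports STARTED twice, then SUCCESS.
get_task_state() {
  calls=$((calls + 1))
  if [ "$calls" -ge 3 ]; then state=SUCCESS; else state=STARTED; fi
}

# Poll until the task leaves STARTED, logging each check like the output above.
wait_for_success() {
  get_task_state "$1"
  while [ "$state" = "STARTED" ]; do
    echo "Task $1 is in state STARTED; waiting before the next check"
    # sleep 3   # the real loop waits between checks
    get_task_state "$1"
  done
  echo "Task $1 is in state $state"
}

calls=0
wait_for_success 75d9bffd-d631-4038-bb63-abf7ee08b119
```

The terminal state is reported once the query stops returning STARTED, matching the transition to SUCCESS seen for task 598604da-24e2-466c-8758-f8d0ea8332b0 at 01:02:34.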
2.16.14 2026-04-05 01:02:34.580010 | orchestrator | 2026-04-05 01:02:34.581095 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-04-05 01:02:34.581141 | orchestrator | 2026-04-05 01:02:34.581153 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-05 01:02:34.581165 | orchestrator | Sunday 05 April 2026 01:00:36 +0000 (0:00:00.569) 0:00:00.569 ********** 2026-04-05 01:02:34.581176 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:02:34.581188 | orchestrator | 2026-04-05 01:02:34.581199 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-05 01:02:34.581210 | orchestrator | Sunday 05 April 2026 01:00:37 +0000 (0:00:00.639) 0:00:01.209 ********** 2026-04-05 01:02:34.581221 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:02:34.581232 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:02:34.581243 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:02:34.581254 | orchestrator | 2026-04-05 01:02:34.581265 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-05 01:02:34.581300 | orchestrator | Sunday 05 April 2026 01:00:38 +0000 (0:00:01.396) 0:00:02.605 ********** 2026-04-05 01:02:34.581311 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:02:34.581322 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:02:34.581332 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:02:34.581343 | orchestrator | 2026-04-05 01:02:34.581354 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-05 01:02:34.581365 | orchestrator | Sunday 05 April 2026 01:00:38 +0000 (0:00:00.314) 0:00:02.919 ********** 2026-04-05 01:02:34.581375 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:02:34.581386 | orchestrator | ok: [testbed-node-4] 2026-04-05 
01:02:34.581396 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:02:34.581407 | orchestrator | 2026-04-05 01:02:34.581419 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-05 01:02:34.581430 | orchestrator | Sunday 05 April 2026 01:00:39 +0000 (0:00:00.811) 0:00:03.731 ********** 2026-04-05 01:02:34.581440 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:02:34.581451 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:02:34.581462 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:02:34.581471 | orchestrator | 2026-04-05 01:02:34.581481 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-05 01:02:34.581490 | orchestrator | Sunday 05 April 2026 01:00:39 +0000 (0:00:00.321) 0:00:04.052 ********** 2026-04-05 01:02:34.581499 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:02:34.581509 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:02:34.581518 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:02:34.581527 | orchestrator | 2026-04-05 01:02:34.581537 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-05 01:02:34.581547 | orchestrator | Sunday 05 April 2026 01:00:40 +0000 (0:00:00.313) 0:00:04.366 ********** 2026-04-05 01:02:34.581556 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:02:34.581565 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:02:34.581575 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:02:34.581584 | orchestrator | 2026-04-05 01:02:34.581594 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-05 01:02:34.581603 | orchestrator | Sunday 05 April 2026 01:00:40 +0000 (0:00:00.365) 0:00:04.732 ********** 2026-04-05 01:02:34.581613 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:02:34.581623 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:02:34.581632 | orchestrator | skipping: [testbed-node-5] 
2026-04-05 01:02:34.581642 | orchestrator |
2026-04-05 01:02:34.581651 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-05 01:02:34.581661 | orchestrator | Sunday 05 April 2026 01:00:41 +0000 (0:00:00.527) 0:00:05.259 **********
2026-04-05 01:02:34.581670 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:02:34.581680 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:02:34.581689 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:02:34.581701 | orchestrator |
2026-04-05 01:02:34.581713 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-05 01:02:34.581724 | orchestrator | Sunday 05 April 2026 01:00:41 +0000 (0:00:00.312) 0:00:05.572 **********
2026-04-05 01:02:34.581736 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 01:02:34.581747 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 01:02:34.581758 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 01:02:34.581771 | orchestrator |
2026-04-05 01:02:34.581792 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-05 01:02:34.581804 | orchestrator | Sunday 05 April 2026 01:00:42 +0000 (0:00:00.681) 0:00:06.254 **********
2026-04-05 01:02:34.581816 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:02:34.581827 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:02:34.581838 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:02:34.581849 | orchestrator |
2026-04-05 01:02:34.581886 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-05 01:02:34.581912 | orchestrator | Sunday 05 April 2026 01:00:42 +0000 (0:00:00.445) 0:00:06.699 **********
2026-04-05 01:02:34.581923 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 01:02:34.581935 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 01:02:34.581946 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 01:02:34.581957 | orchestrator |
2026-04-05 01:02:34.581969 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-05 01:02:34.581980 | orchestrator | Sunday 05 April 2026 01:00:45 +0000 (0:00:03.244) 0:00:09.944 **********
2026-04-05 01:02:34.581993 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-05 01:02:34.582005 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-05 01:02:34.582062 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-05 01:02:34.582076 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:02:34.582086 | orchestrator |
2026-04-05 01:02:34.582145 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-05 01:02:34.582158 | orchestrator | Sunday 05 April 2026 01:00:46 +0000 (0:00:00.422) 0:00:10.367 **********
2026-04-05 01:02:34.582169 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-05 01:02:34.582181 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-05 01:02:34.582191 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-05 01:02:34.582201 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:02:34.582210 | orchestrator |
2026-04-05 01:02:34.582220 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-05 01:02:34.582229 | orchestrator | Sunday 05 April 2026 01:00:47 +0000 (0:00:00.853) 0:00:11.220 **********
2026-04-05 01:02:34.582241 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-05 01:02:34.582253 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-05 01:02:34.582263 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-05 01:02:34.582273 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:02:34.582283 | orchestrator |
2026-04-05 01:02:34.582292 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-05 01:02:34.582302 | orchestrator | Sunday 05 April 2026 01:00:47 +0000 (0:00:00.164) 0:00:11.385 **********
2026-04-05 01:02:34.582327 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '87bcd926dc00', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-05 01:00:43.673372', 'end': '2026-04-05 01:00:43.717557', 'delta': '0:00:00.044185', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['87bcd926dc00'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-05 01:02:34.582340 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'c005d7d07139', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-05 01:00:44.801435', 'end': '2026-04-05 01:00:44.845083', 'delta': '0:00:00.043648', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c005d7d07139'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-05 01:02:34.582380 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'edf2d16fd18e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-05 01:00:45.660176', 'end': '2026-04-05 01:00:45.703415', 'delta': '0:00:00.043239', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['edf2d16fd18e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-05 01:02:34.582392 | orchestrator |
2026-04-05 01:02:34.582402 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-05 01:02:34.582411 | orchestrator | Sunday 05 April 2026 01:00:47 +0000 (0:00:00.416) 0:00:11.802 **********
2026-04-05 01:02:34.582421 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:02:34.582431 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:02:34.582440 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:02:34.582449 | orchestrator |
2026-04-05 01:02:34.582459 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-05 01:02:34.582469 | orchestrator | Sunday 05 April 2026 01:00:48 +0000 (0:00:00.560) 0:00:12.362 **********
2026-04-05 01:02:34.582478 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-04-05 01:02:34.582488 | orchestrator |
2026-04-05 01:02:34.582498 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-05 01:02:34.582507 | orchestrator | Sunday 05 April 2026 01:00:49 +0000 (0:00:01.325) 0:00:13.688 **********
2026-04-05 01:02:34.582517 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:02:34.582527 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:02:34.582536 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:02:34.582546 | orchestrator |
2026-04-05 01:02:34.582555 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-05 01:02:34.582564 | orchestrator | Sunday 05 April 2026 01:00:49 +0000 (0:00:00.440) 0:00:13.987 **********
2026-04-05 01:02:34.582574 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:02:34.582583 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:02:34.582593 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:02:34.582603 | orchestrator |
2026-04-05 01:02:34.582619 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-05 01:02:34.582628 | orchestrator | Sunday 05 April 2026 01:00:50 +0000 (0:00:00.440) 0:00:14.428 **********
2026-04-05 01:02:34.582638 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:02:34.582647 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:02:34.582657 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:02:34.582666 | orchestrator |
2026-04-05 01:02:34.582675 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-05 01:02:34.582685 | orchestrator | Sunday 05 April 2026 01:00:50 +0000 (0:00:00.535) 0:00:14.963 **********
2026-04-05 01:02:34.582695 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:02:34.582704 | orchestrator |
2026-04-05 01:02:34.582714 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-05 01:02:34.582723 | orchestrator | Sunday 05 April 2026 01:00:50 +0000 (0:00:00.110) 0:00:15.074 **********
2026-04-05 01:02:34.582733 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:02:34.582743 | orchestrator |
2026-04-05 01:02:34.582752 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-05 01:02:34.582762 | orchestrator | Sunday 05 April 2026 01:00:51 +0000 (0:00:00.234) 0:00:15.308 **********
2026-04-05 01:02:34.582771 | orchestrator | skipping: [testbed-node-3]
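The `Set_fact running_mon - container` results above come from looping `docker ps -q --filter name=ceph-mon-<host>` over the mon hosts: a non-empty stdout means that host has a live mon container, which the role can then use to read back cluster state such as the fsid. A minimal sketch of reducing that loop output to a single usable mon host (the helper name is hypothetical, and picking the first match is one plausible reduction, not necessarily the role's exact rule):

```python
# Illustrative sketch (assumed helper name): reduce the per-host
# `docker ps -q --filter name=ceph-mon-<host>` loop results to one
# host that has a running mon container.
def first_running_mon(ps_results):
    """ps_results: list of (hostname, stdout) pairs from the docker ps loop."""
    for host, stdout in ps_results:
        if stdout.strip():  # non-empty stdout == a container ID was printed
            return host
    return None  # no mon running anywhere

# Container IDs taken from the loop output in this log.
results = [
    ("testbed-node-0", "87bcd926dc00"),
    ("testbed-node-1", "c005d7d07139"),
    ("testbed-node-2", "edf2d16fd18e"),
]
print(first_running_mon(results))  # testbed-node-0
```

Because a mon was found, the subsequent `Get current fsid if cluster is already running` task succeeds (delegated to a mon host) and `Generate cluster fsid` is skipped: the existing fsid is reused instead of minting a new one.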
2026-04-05 01:02:34.582781 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:02:34.582790 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:02:34.582800 | orchestrator |
2026-04-05 01:02:34.582809 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-05 01:02:34.582819 | orchestrator | Sunday 05 April 2026 01:00:51 +0000 (0:00:00.321) 0:00:15.630 **********
2026-04-05 01:02:34.582828 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:02:34.582838 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:02:34.582847 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:02:34.582857 | orchestrator |
2026-04-05 01:02:34.582893 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-05 01:02:34.582903 | orchestrator | Sunday 05 April 2026 01:00:51 +0000 (0:00:00.355) 0:00:15.985 **********
2026-04-05 01:02:34.582913 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:02:34.582923 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:02:34.582932 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:02:34.582942 | orchestrator |
2026-04-05 01:02:34.582951 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-05 01:02:34.582961 | orchestrator | Sunday 05 April 2026 01:00:52 +0000 (0:00:00.612) 0:00:16.598 **********
2026-04-05 01:02:34.582970 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:02:34.582980 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:02:34.582990 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:02:34.582999 | orchestrator |
2026-04-05 01:02:34.583008 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-05 01:02:34.583018 | orchestrator | Sunday 05 April 2026 01:00:52 +0000 (0:00:00.339) 0:00:16.938 **********
2026-04-05 01:02:34.583028 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:02:34.583037 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:02:34.583047 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:02:34.583057 | orchestrator |
2026-04-05 01:02:34.583066 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-05 01:02:34.583076 | orchestrator | Sunday 05 April 2026 01:00:53 +0000 (0:00:00.333) 0:00:17.271 **********
2026-04-05 01:02:34.583085 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:02:34.583095 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:02:34.583104 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:02:34.583143 | orchestrator |
2026-04-05 01:02:34.583155 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-05 01:02:34.583164 | orchestrator | Sunday 05 April 2026 01:00:53 +0000 (0:00:00.363) 0:00:17.635 **********
2026-04-05 01:02:34.583174 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:02:34.583191 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:02:34.583201 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:02:34.583210 | orchestrator |
2026-04-05 01:02:34.583223 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-05 01:02:34.583240 | orchestrator | Sunday 05 April 2026 01:00:54 +0000 (0:00:00.562) 0:00:18.197 **********
2026-04-05 01:02:34.583262 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bd7e6aba--230a--5307--afd3--3b474950d4d0-osd--block--bd7e6aba--230a--5307--afd3--3b474950d4d0', 'dm-uuid-LVM-m1QlHxCsbxztU2FuOybrbqS7CBCT7wjEoNVQSmaG9N9pwN9NxAX2gf2DoZoQBSLW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-05 01:02:34.583290 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ffa9e237--b4c6--554d--9530--d8db42979c07-osd--block--ffa9e237--b4c6--554d--9530--d8db42979c07', 'dm-uuid-LVM-MPhbeREO53p8Jlrygb16JZjJdslDbKe9UFAHKtvpsM3Td0r3FZzHgndlgeccqD31'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-05 01:02:34.583307 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-05 01:02:34.583325 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-05 01:02:34.583343 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-05 01:02:34.583368 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-05 01:02:34.583385 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-05 01:02:34.583451 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-05 01:02:34.583483 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-05 01:02:34.583499 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-05 01:02:34.583525 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5', 'scsi-SQEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5-part1', 'scsi-SQEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5-part14', 'scsi-SQEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5-part15', 'scsi-SQEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5-part16', 'scsi-SQEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-05 01:02:34.583547 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--bd7e6aba--230a--5307--afd3--3b474950d4d0-osd--block--bd7e6aba--230a--5307--afd3--3b474950d4d0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-g3afkn-oiK6-fbGy-ikDI-QGrc-Ke5t-Vng8th', 'scsi-0QEMU_QEMU_HARDDISK_caeb3c42-c4b8-40bd-8e18-9e72dc321772', 'scsi-SQEMU_QEMU_HARDDISK_caeb3c42-c4b8-40bd-8e18-9e72dc321772'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-05 01:02:34.583624 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ffa9e237--b4c6--554d--9530--d8db42979c07-osd--block--ffa9e237--b4c6--554d--9530--d8db42979c07'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oEjU1j-7vo1-FBbZ-xQfu-XNze-tfrU-fzo2Hf', 'scsi-0QEMU_QEMU_HARDDISK_62ed18a5-03b2-4cb7-a868-d43e6cb85064', 'scsi-SQEMU_QEMU_HARDDISK_62ed18a5-03b2-4cb7-a868-d43e6cb85064'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-05 01:02:34.583645 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c330a934--8550--546d--8551--a9ce4f4a4f0f-osd--block--c330a934--8550--546d--8551--a9ce4f4a4f0f', 'dm-uuid-LVM-M5GW0XsaZBYOdi3LjwKFnXxM7dHZGYisyuj76tYmxE1IOZqmeCabtxDaQl51AQiT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-05 01:02:34.583657 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_831c674b-a7a8-4a18-9cfe-2b7acfd18a4e', 'scsi-SQEMU_QEMU_HARDDISK_831c674b-a7a8-4a18-9cfe-2b7acfd18a4e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-05 01:02:34.583668 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--824ea9fd--8e44--5b08--9075--8333765a455e-osd--block--824ea9fd--8e44--5b08--9075--8333765a455e', 'dm-uuid-LVM-YQsQAY86Fx4ju4TNq2gKTp2qhyUkpD30NpWlR2Lj975POoLLl6xqkUcdSwvKaup1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-05 01:02:34.583678 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-05 01:02:34.583694 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-03-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-05 01:02:34.583705 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-05 01:02:34.583727 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:02:34.583767 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-05 01:02:34.583779 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-05 01:02:34.583789 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-05 01:02:34.583799 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-05 01:02:34.583809 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-05 01:02:34.583818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-05 01:02:34.583840 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3', 'scsi-SQEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3-part1', 'scsi-SQEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3-part14', 'scsi-SQEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3-part15', 'scsi-SQEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3-part16', 'scsi-SQEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-05 01:02:34.583881 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3bb92c70--c222--5380--a7bf--d21f250fcd2a-osd--block--3bb92c70--c222--5380--a7bf--d21f250fcd2a', 'dm-uuid-LVM-Iwi0qyKjiGmMF5ursl1dLgDY0DpsldIbWEqgh6AVunI3t2Bgz9ffIVamVaOiYcdC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-05 01:02:34.583893 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c330a934--8550--546d--8551--a9ce4f4a4f0f-osd--block--c330a934--8550--546d--8551--a9ce4f4a4f0f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-IvGrtq-91Hy-Ua6w-dSHl-JVgq-dNiF-ZDVSPO', 'scsi-0QEMU_QEMU_HARDDISK_dde5ff38-a1e5-4746-bab1-211109e78654', 'scsi-SQEMU_QEMU_HARDDISK_dde5ff38-a1e5-4746-bab1-211109e78654'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-05 01:02:34.583903 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--377d1900--3c05--5c55--820b--3d4ba19b512c-osd--block--377d1900--3c05--5c55--820b--3d4ba19b512c', 'dm-uuid-LVM-KOpPIgP3YZPgrR5U1Alrp0YgUL65ze1aGCE4YLXLcRuVkn0cprnjm94w3OsBdDWy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-05 01:02:34.583919 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--824ea9fd--8e44--5b08--9075--8333765a455e-osd--block--824ea9fd--8e44--5b08--9075--8333765a455e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3VLkW2-HYKO-b9sH-FgVc-eGYL-BmyQ-VG6oGC', 'scsi-0QEMU_QEMU_HARDDISK_4c017526-66b5-4804-9f5d-05d3d9a7b1e0', 'scsi-SQEMU_QEMU_HARDDISK_4c017526-66b5-4804-9f5d-05d3d9a7b1e0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-05 01:02:34.583929 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-05 01:02:34.583952 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26a11086-b273-42dd-aa8f-9644b133a637', 'scsi-SQEMU_QEMU_HARDDISK_26a11086-b273-42dd-aa8f-9644b133a637'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:02:34.583963 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:02:34.583973 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-03-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:02:34.583983 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:02:34.583994 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:02:34.584004 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:02:34.584014 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:02:34.584024 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:02:34.584038 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:02:34.584054 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:02:34.584071 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d10d19df-84d5-4f9c-9dff-ab89b235cba9', 'scsi-SQEMU_QEMU_HARDDISK_d10d19df-84d5-4f9c-9dff-ab89b235cba9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d10d19df-84d5-4f9c-9dff-ab89b235cba9-part1', 'scsi-SQEMU_QEMU_HARDDISK_d10d19df-84d5-4f9c-9dff-ab89b235cba9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d10d19df-84d5-4f9c-9dff-ab89b235cba9-part14', 'scsi-SQEMU_QEMU_HARDDISK_d10d19df-84d5-4f9c-9dff-ab89b235cba9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d10d19df-84d5-4f9c-9dff-ab89b235cba9-part15', 'scsi-SQEMU_QEMU_HARDDISK_d10d19df-84d5-4f9c-9dff-ab89b235cba9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d10d19df-84d5-4f9c-9dff-ab89b235cba9-part16', 
'scsi-SQEMU_QEMU_HARDDISK_d10d19df-84d5-4f9c-9dff-ab89b235cba9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:02:34.584083 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--3bb92c70--c222--5380--a7bf--d21f250fcd2a-osd--block--3bb92c70--c222--5380--a7bf--d21f250fcd2a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-m5POgL-rOBp-YXYX-f3KV-nJ3H-4ca2-4TuzW5', 'scsi-0QEMU_QEMU_HARDDISK_a543ca24-8ce5-4d4d-a7ab-f0db2d7f7bb2', 'scsi-SQEMU_QEMU_HARDDISK_a543ca24-8ce5-4d4d-a7ab-f0db2d7f7bb2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:02:34.584098 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--377d1900--3c05--5c55--820b--3d4ba19b512c-osd--block--377d1900--3c05--5c55--820b--3d4ba19b512c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1de6Ye-L2s7-EBhG-a0LS-PRvj-HatI-TsRBgx', 'scsi-0QEMU_QEMU_HARDDISK_e02e3eed-6f8b-4cff-9a7e-0f14751ef6ba', 'scsi-SQEMU_QEMU_HARDDISK_e02e3eed-6f8b-4cff-9a7e-0f14751ef6ba'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:02:34.584114 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_160e21cb-7f36-4211-96c7-9609d25dd0e2', 'scsi-SQEMU_QEMU_HARDDISK_160e21cb-7f36-4211-96c7-9609d25dd0e2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:02:34.584130 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-03-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:02:34.584141 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:02:34.584150 | orchestrator | 2026-04-05 01:02:34.584160 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-04-05 01:02:34.584170 | orchestrator | Sunday 05 April 2026 01:00:54 +0000 (0:00:00.648) 0:00:18.846 ********** 2026-04-05 01:02:34.584180 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bd7e6aba--230a--5307--afd3--3b474950d4d0-osd--block--bd7e6aba--230a--5307--afd3--3b474950d4d0', 'dm-uuid-LVM-m1QlHxCsbxztU2FuOybrbqS7CBCT7wjEoNVQSmaG9N9pwN9NxAX2gf2DoZoQBSLW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584191 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ffa9e237--b4c6--554d--9530--d8db42979c07-osd--block--ffa9e237--b4c6--554d--9530--d8db42979c07', 'dm-uuid-LVM-MPhbeREO53p8Jlrygb16JZjJdslDbKe9UFAHKtvpsM3Td0r3FZzHgndlgeccqD31'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584201 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584220 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584230 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584247 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584257 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584267 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584277 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584287 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584307 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c330a934--8550--546d--8551--a9ce4f4a4f0f-osd--block--c330a934--8550--546d--8551--a9ce4f4a4f0f', 'dm-uuid-LVM-M5GW0XsaZBYOdi3LjwKFnXxM7dHZGYisyuj76tYmxE1IOZqmeCabtxDaQl51AQiT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584323 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--824ea9fd--8e44--5b08--9075--8333765a455e-osd--block--824ea9fd--8e44--5b08--9075--8333765a455e', 'dm-uuid-LVM-YQsQAY86Fx4ju4TNq2gKTp2qhyUkpD30NpWlR2Lj975POoLLl6xqkUcdSwvKaup1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584334 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5', 'scsi-SQEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5-part1', 'scsi-SQEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5-part14', 'scsi-SQEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5-part15', 'scsi-SQEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5-part16', 'scsi-SQEMU_QEMU_HARDDISK_e1b8773f-e2de-400e-b2a6-6c6ae68fe2f5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584355 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584371 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--bd7e6aba--230a--5307--afd3--3b474950d4d0-osd--block--bd7e6aba--230a--5307--afd3--3b474950d4d0'], 'host': 
'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-g3afkn-oiK6-fbGy-ikDI-QGrc-Ke5t-Vng8th', 'scsi-0QEMU_QEMU_HARDDISK_caeb3c42-c4b8-40bd-8e18-9e72dc321772', 'scsi-SQEMU_QEMU_HARDDISK_caeb3c42-c4b8-40bd-8e18-9e72dc321772'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584381 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584392 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ffa9e237--b4c6--554d--9530--d8db42979c07-osd--block--ffa9e237--b4c6--554d--9530--d8db42979c07'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oEjU1j-7vo1-FBbZ-xQfu-XNze-tfrU-fzo2Hf', 'scsi-0QEMU_QEMU_HARDDISK_62ed18a5-03b2-4cb7-a868-d43e6cb85064', 'scsi-SQEMU_QEMU_HARDDISK_62ed18a5-03b2-4cb7-a868-d43e6cb85064'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584402 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_831c674b-a7a8-4a18-9cfe-2b7acfd18a4e', 'scsi-SQEMU_QEMU_HARDDISK_831c674b-a7a8-4a18-9cfe-2b7acfd18a4e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584424 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-03-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584434 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:02:34.584444 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584461 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584471 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': 
None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584481 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584491 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584506 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584528 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3', 'scsi-SQEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3-part1', 'scsi-SQEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3-part14', 'scsi-SQEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3-part15', 'scsi-SQEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3-part16', 'scsi-SQEMU_QEMU_HARDDISK_37f0d12f-2bb4-42f9-a6b7-b33c691698f3-part16'], 'labels': ['BOOT'], 'masters': 
[], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584541 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c330a934--8550--546d--8551--a9ce4f4a4f0f-osd--block--c330a934--8550--546d--8551--a9ce4f4a4f0f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-IvGrtq-91Hy-Ua6w-dSHl-JVgq-dNiF-ZDVSPO', 'scsi-0QEMU_QEMU_HARDDISK_dde5ff38-a1e5-4746-bab1-211109e78654', 'scsi-SQEMU_QEMU_HARDDISK_dde5ff38-a1e5-4746-bab1-211109e78654'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584567 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--824ea9fd--8e44--5b08--9075--8333765a455e-osd--block--824ea9fd--8e44--5b08--9075--8333765a455e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3VLkW2-HYKO-b9sH-FgVc-eGYL-BmyQ-VG6oGC', 'scsi-0QEMU_QEMU_HARDDISK_4c017526-66b5-4804-9f5d-05d3d9a7b1e0', 'scsi-SQEMU_QEMU_HARDDISK_4c017526-66b5-4804-9f5d-05d3d9a7b1e0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584582 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26a11086-b273-42dd-aa8f-9644b133a637', 'scsi-SQEMU_QEMU_HARDDISK_26a11086-b273-42dd-aa8f-9644b133a637'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584599 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-03-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584609 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3bb92c70--c222--5380--a7bf--d21f250fcd2a-osd--block--3bb92c70--c222--5380--a7bf--d21f250fcd2a', 'dm-uuid-LVM-Iwi0qyKjiGmMF5ursl1dLgDY0DpsldIbWEqgh6AVunI3t2Bgz9ffIVamVaOiYcdC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584619 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:02:34.584629 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--377d1900--3c05--5c55--820b--3d4ba19b512c-osd--block--377d1900--3c05--5c55--820b--3d4ba19b512c', 'dm-uuid-LVM-KOpPIgP3YZPgrR5U1Alrp0YgUL65ze1aGCE4YLXLcRuVkn0cprnjm94w3OsBdDWy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584645 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584659 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584670 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584684 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584695 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584705 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584729 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584739 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584760 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d10d19df-84d5-4f9c-9dff-ab89b235cba9', 'scsi-SQEMU_QEMU_HARDDISK_d10d19df-84d5-4f9c-9dff-ab89b235cba9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d10d19df-84d5-4f9c-9dff-ab89b235cba9-part1', 'scsi-SQEMU_QEMU_HARDDISK_d10d19df-84d5-4f9c-9dff-ab89b235cba9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d10d19df-84d5-4f9c-9dff-ab89b235cba9-part14', 'scsi-SQEMU_QEMU_HARDDISK_d10d19df-84d5-4f9c-9dff-ab89b235cba9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d10d19df-84d5-4f9c-9dff-ab89b235cba9-part15', 'scsi-SQEMU_QEMU_HARDDISK_d10d19df-84d5-4f9c-9dff-ab89b235cba9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d10d19df-84d5-4f9c-9dff-ab89b235cba9-part16', 'scsi-SQEMU_QEMU_HARDDISK_d10d19df-84d5-4f9c-9dff-ab89b235cba9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-05 01:02:34.584772 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--3bb92c70--c222--5380--a7bf--d21f250fcd2a-osd--block--3bb92c70--c222--5380--a7bf--d21f250fcd2a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-m5POgL-rOBp-YXYX-f3KV-nJ3H-4ca2-4TuzW5', 'scsi-0QEMU_QEMU_HARDDISK_a543ca24-8ce5-4d4d-a7ab-f0db2d7f7bb2', 'scsi-SQEMU_QEMU_HARDDISK_a543ca24-8ce5-4d4d-a7ab-f0db2d7f7bb2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584789 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--377d1900--3c05--5c55--820b--3d4ba19b512c-osd--block--377d1900--3c05--5c55--820b--3d4ba19b512c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1de6Ye-L2s7-EBhG-a0LS-PRvj-HatI-TsRBgx', 'scsi-0QEMU_QEMU_HARDDISK_e02e3eed-6f8b-4cff-9a7e-0f14751ef6ba', 'scsi-SQEMU_QEMU_HARDDISK_e02e3eed-6f8b-4cff-9a7e-0f14751ef6ba'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584804 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_160e21cb-7f36-4211-96c7-9609d25dd0e2', 'scsi-SQEMU_QEMU_HARDDISK_160e21cb-7f36-4211-96c7-9609d25dd0e2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584820 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-03-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:34.584830 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:02:34.584840 | orchestrator | 2026-04-05 01:02:34.584849 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-05 01:02:34.584871 | orchestrator | Sunday 05 April 2026 01:00:55 +0000 (0:00:00.820) 0:00:19.666 ********** 2026-04-05 01:02:34.584882 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:02:34.584892 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:02:34.584901 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:02:34.584911 | orchestrator | 2026-04-05 01:02:34.584920 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-05 01:02:34.584930 | orchestrator | Sunday 05 April 2026 01:00:56 +0000 (0:00:00.683) 0:00:20.350 ********** 2026-04-05 01:02:34.584940 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:02:34.584949 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:02:34.584959 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:02:34.584975 | orchestrator | 2026-04-05 01:02:34.584985 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-05 01:02:34.584995 | orchestrator | Sunday 05 April 2026 01:00:56 +0000 (0:00:00.521) 0:00:20.871 ********** 2026-04-05 01:02:34.585004 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:02:34.585014 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:02:34.585023 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:02:34.585033 | orchestrator | 2026-04-05 01:02:34.585043 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-05 01:02:34.585052 | orchestrator | Sunday 05 April 2026 01:00:57 +0000 (0:00:00.721) 0:00:21.592 
********** 2026-04-05 01:02:34.585062 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:02:34.585072 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:02:34.585081 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:02:34.585091 | orchestrator | 2026-04-05 01:02:34.585100 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-05 01:02:34.585110 | orchestrator | Sunday 05 April 2026 01:00:57 +0000 (0:00:00.304) 0:00:21.897 ********** 2026-04-05 01:02:34.585120 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:02:34.585129 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:02:34.585139 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:02:34.585148 | orchestrator | 2026-04-05 01:02:34.585158 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-05 01:02:34.585167 | orchestrator | Sunday 05 April 2026 01:00:58 +0000 (0:00:00.447) 0:00:22.344 ********** 2026-04-05 01:02:34.585177 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:02:34.585187 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:02:34.585196 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:02:34.585206 | orchestrator | 2026-04-05 01:02:34.585215 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-05 01:02:34.585225 | orchestrator | Sunday 05 April 2026 01:00:58 +0000 (0:00:00.578) 0:00:22.922 ********** 2026-04-05 01:02:34.585235 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-04-05 01:02:34.585245 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-04-05 01:02:34.585254 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-04-05 01:02:34.585264 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-04-05 01:02:34.585273 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-04-05 01:02:34.585283 | orchestrator 
| ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-05 01:02:34.585293 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-05 01:02:34.585302 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-05 01:02:34.585312 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-05 01:02:34.585321 | orchestrator | 2026-04-05 01:02:34.585332 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-05 01:02:34.585341 | orchestrator | Sunday 05 April 2026 01:00:59 +0000 (0:00:00.973) 0:00:23.896 ********** 2026-04-05 01:02:34.585350 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-05 01:02:34.585360 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-05 01:02:34.585375 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-05 01:02:34.585385 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:02:34.585395 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-05 01:02:34.585404 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-05 01:02:34.585414 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-05 01:02:34.585423 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:02:34.585433 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-05 01:02:34.585443 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-05 01:02:34.585452 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-05 01:02:34.585462 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:02:34.585477 | orchestrator | 2026-04-05 01:02:34.585487 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-05 01:02:34.585496 | orchestrator | Sunday 05 April 2026 01:01:00 +0000 (0:00:00.363) 0:00:24.260 ********** 2026-04-05 
01:02:34.585506 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:02:34.585516 | orchestrator | 2026-04-05 01:02:34.585526 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-05 01:02:34.585536 | orchestrator | Sunday 05 April 2026 01:01:00 +0000 (0:00:00.784) 0:00:25.045 ********** 2026-04-05 01:02:34.585552 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:02:34.585562 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:02:34.585571 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:02:34.585581 | orchestrator | 2026-04-05 01:02:34.585590 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-05 01:02:34.585600 | orchestrator | Sunday 05 April 2026 01:01:01 +0000 (0:00:00.353) 0:00:25.398 ********** 2026-04-05 01:02:34.585609 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:02:34.585619 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:02:34.585629 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:02:34.585638 | orchestrator | 2026-04-05 01:02:34.585648 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-05 01:02:34.585657 | orchestrator | Sunday 05 April 2026 01:01:01 +0000 (0:00:00.320) 0:00:25.718 ********** 2026-04-05 01:02:34.585667 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:02:34.585676 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:02:34.585686 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:02:34.585696 | orchestrator | 2026-04-05 01:02:34.585705 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-05 01:02:34.585715 | orchestrator | Sunday 05 April 2026 01:01:02 +0000 (0:00:00.398) 0:00:26.117 ********** 2026-04-05 
01:02:34.585725 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:02:34.585734 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:02:34.585744 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:02:34.585754 | orchestrator | 2026-04-05 01:02:34.585764 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-05 01:02:34.585773 | orchestrator | Sunday 05 April 2026 01:01:02 +0000 (0:00:00.652) 0:00:26.770 ********** 2026-04-05 01:02:34.585782 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 01:02:34.585792 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 01:02:34.585801 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 01:02:34.585811 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:02:34.585820 | orchestrator | 2026-04-05 01:02:34.585830 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-05 01:02:34.585840 | orchestrator | Sunday 05 April 2026 01:01:03 +0000 (0:00:00.389) 0:00:27.159 ********** 2026-04-05 01:02:34.585849 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 01:02:34.585938 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 01:02:34.585954 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 01:02:34.585964 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:02:34.585974 | orchestrator | 2026-04-05 01:02:34.585984 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-05 01:02:34.585993 | orchestrator | Sunday 05 April 2026 01:01:03 +0000 (0:00:00.383) 0:00:27.543 ********** 2026-04-05 01:02:34.586003 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 01:02:34.586013 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 01:02:34.586064 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 01:02:34.586074 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:02:34.586084 | orchestrator | 2026-04-05 01:02:34.586094 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-05 01:02:34.586114 | orchestrator | Sunday 05 April 2026 01:01:03 +0000 (0:00:00.393) 0:00:27.936 ********** 2026-04-05 01:02:34.586124 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:02:34.586134 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:02:34.586143 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:02:34.586153 | orchestrator | 2026-04-05 01:02:34.586162 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-05 01:02:34.586170 | orchestrator | Sunday 05 April 2026 01:01:04 +0000 (0:00:00.354) 0:00:28.291 ********** 2026-04-05 01:02:34.586178 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-05 01:02:34.586186 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-05 01:02:34.586194 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-05 01:02:34.586201 | orchestrator | 2026-04-05 01:02:34.586209 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-05 01:02:34.586217 | orchestrator | Sunday 05 April 2026 01:01:04 +0000 (0:00:00.579) 0:00:28.870 ********** 2026-04-05 01:02:34.586226 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 01:02:34.586234 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 01:02:34.586246 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 01:02:34.586254 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-05 01:02:34.586262 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-04-05 01:02:34.586270 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-05 01:02:34.586278 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-05 01:02:34.586286 | orchestrator | 2026-04-05 01:02:34.586294 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-05 01:02:34.586302 | orchestrator | Sunday 05 April 2026 01:01:05 +0000 (0:00:01.060) 0:00:29.930 ********** 2026-04-05 01:02:34.586310 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 01:02:34.586318 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 01:02:34.586326 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 01:02:34.586333 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-05 01:02:34.586341 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-05 01:02:34.586349 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-05 01:02:34.586364 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-05 01:02:34.586372 | orchestrator | 2026-04-05 01:02:34.586380 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-04-05 01:02:34.586388 | orchestrator | Sunday 05 April 2026 01:01:07 +0000 (0:00:02.144) 0:00:32.075 ********** 2026-04-05 01:02:34.586396 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:02:34.586403 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:02:34.586411 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-04-05 01:02:34.586419 | orchestrator | 2026-04-05 01:02:34.586427 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-04-05 01:02:34.586435 | orchestrator | Sunday 05 April 2026 01:01:08 +0000 (0:00:00.389) 0:00:32.464 ********** 2026-04-05 01:02:34.586443 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-05 01:02:34.586452 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-05 01:02:34.586465 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-05 01:02:34.586473 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-05 01:02:34.586481 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-05 01:02:34.586489 | orchestrator | 2026-04-05 01:02:34.586497 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-04-05 01:02:34.586505 | orchestrator | Sunday 05 April 2026 01:01:45 +0000 (0:00:36.970) 0:01:09.435 ********** 2026-04-05 01:02:34.586513 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:02:34.586521 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:02:34.586528 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:02:34.586536 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:02:34.586544 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:02:34.586552 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:02:34.586560 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-04-05 01:02:34.586568 | orchestrator | 2026-04-05 01:02:34.586575 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-04-05 01:02:34.586584 | orchestrator | Sunday 05 April 2026 01:02:03 +0000 (0:00:18.449) 0:01:27.885 ********** 2026-04-05 01:02:34.586595 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:02:34.586602 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:02:34.586610 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:02:34.586618 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:02:34.586653 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:02:34.586662 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:02:34.586670 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-05 01:02:34.586678 | orchestrator | 2026-04-05 01:02:34.586686 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-04-05 01:02:34.586694 | orchestrator | Sunday 05 April 2026 01:02:13 +0000 (0:00:09.502) 0:01:37.387 ********** 2026-04-05 01:02:34.586702 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:02:34.586710 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-05 01:02:34.586718 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-05 01:02:34.586726 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:02:34.586734 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-05 01:02:34.586753 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-05 01:02:34.586762 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:02:34.586770 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-05 01:02:34.586778 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-05 01:02:34.586786 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:02:34.586794 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-05 01:02:34.586801 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-05 01:02:34.586809 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:02:34.586817 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-04-05 01:02:34.586824 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-05 01:02:34.586832 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-05 01:02:34.586840 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-05 01:02:34.586848 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-05 01:02:34.586856 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2026-04-05 01:02:34.586877 | orchestrator |
2026-04-05 01:02:34.586886 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 01:02:34.586894 | orchestrator | testbed-node-3 : ok=25  changed=0  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-04-05 01:02:34.586903 | orchestrator | testbed-node-4 : ok=18  changed=0  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-04-05 01:02:34.586911 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-04-05 01:02:34.586919 | orchestrator |
2026-04-05 01:02:34.586927 | orchestrator |
2026-04-05 01:02:34.586935 | orchestrator |
2026-04-05 01:02:34.586943 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 01:02:34.586950 | orchestrator | Sunday 05 April 2026 01:02:31 +0000 (0:00:18.095) 0:01:55.482 **********
2026-04-05 01:02:34.586958 | orchestrator | ===============================================================================
2026-04-05 01:02:34.586966 | orchestrator | create openstack pool(s) ----------------------------------------------- 36.97s
2026-04-05 01:02:34.586974 | orchestrator | generate keys ---------------------------------------------------------- 18.45s
2026-04-05 01:02:34.586982 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.10s
2026-04-05 01:02:34.586990 | orchestrator | get keys from monitors -------------------------------------------------- 9.50s
2026-04-05 01:02:34.586998 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.24s
2026-04-05 01:02:34.587005 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.14s
2026-04-05 01:02:34.587013 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 1.40s
2026-04-05 01:02:34.587021 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.33s
2026-04-05 01:02:34.587029 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.06s
2026-04-05 01:02:34.587037 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.97s
2026-04-05 01:02:34.587044 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.85s
2026-04-05 01:02:34.587052 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.82s
2026-04-05 01:02:34.587060 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.81s
2026-04-05 01:02:34.587077 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.79s
2026-04-05 01:02:34.587085 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.72s
2026-04-05 01:02:34.587093 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.68s
2026-04-05 01:02:34.587121 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.68s
2026-04-05 01:02:34.587130 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.65s
2026-04-05 01:02:34.587138 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.65s
2026-04-05 01:02:34.587146 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.64s
2026-04-05 01:02:34.587154 | orchestrator | 2026-04-05 01:02:34 | INFO  | Task 325b6600-e709-47a4-b335-835f2bb43dd5 is in state STARTED
2026-04-05 01:02:34.587162 | orchestrator | 2026-04-05 01:02:34 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:02:37.638470 | orchestrator | 2026-04-05 01:02:37 | INFO  | Task 8c012d9f-75c7-447a-9e8a-47e3c6b303aa is in state STARTED
2026-04-05 01:02:37.640463 | orchestrator | 2026-04-05 01:02:37 | INFO  | Task 75d9bffd-d631-4038-bb63-abf7ee08b119 is in state STARTED
2026-04-05 01:02:37.642644 | orchestrator | 2026-04-05 01:02:37 | INFO  | Task 325b6600-e709-47a4-b335-835f2bb43dd5 is in state STARTED
2026-04-05 01:02:37.642690 | orchestrator | 2026-04-05 01:02:37 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:02:40.700617 | orchestrator | 2026-04-05 01:02:40 | INFO  | Task 8c012d9f-75c7-447a-9e8a-47e3c6b303aa is in state STARTED
2026-04-05 01:02:40.703392 | orchestrator | 2026-04-05 01:02:40 | INFO  | Task 75d9bffd-d631-4038-bb63-abf7ee08b119 is in state STARTED
2026-04-05 01:02:40.706948 | orchestrator | 2026-04-05 01:02:40 | INFO  | Task 325b6600-e709-47a4-b335-835f2bb43dd5 is in state STARTED
2026-04-05 01:02:40.707037 | orchestrator | 2026-04-05 01:02:40 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:02:43.759228 | orchestrator | 2026-04-05 01:02:43 | INFO  | Task 8c012d9f-75c7-447a-9e8a-47e3c6b303aa is in state STARTED
2026-04-05 01:02:43.761940 | orchestrator | 2026-04-05 01:02:43 | INFO  | Task 75d9bffd-d631-4038-bb63-abf7ee08b119 is in state STARTED
2026-04-05 01:02:43.765101 | orchestrator | 2026-04-05 01:02:43 | INFO  | Task 325b6600-e709-47a4-b335-835f2bb43dd5 is in state STARTED
2026-04-05 01:02:43.767213 | orchestrator | 2026-04-05 01:02:43 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:02:46.829139 | orchestrator | 2026-04-05 01:02:46 | INFO  | Task
8c012d9f-75c7-447a-9e8a-47e3c6b303aa is in state STARTED 2026-04-05 01:02:46.830828 | orchestrator | 2026-04-05 01:02:46 | INFO  | Task 75d9bffd-d631-4038-bb63-abf7ee08b119 is in state SUCCESS 2026-04-05 01:02:46.832558 | orchestrator | 2026-04-05 01:02:46.832633 | orchestrator | 2026-04-05 01:02:46.832647 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 01:02:46.832659 | orchestrator | 2026-04-05 01:02:46.832668 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 01:02:46.832677 | orchestrator | Sunday 05 April 2026 00:59:51 +0000 (0:00:00.396) 0:00:00.396 ********** 2026-04-05 01:02:46.832687 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:02:46.832697 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:02:46.832705 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:02:46.832714 | orchestrator | 2026-04-05 01:02:46.832723 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 01:02:46.832732 | orchestrator | Sunday 05 April 2026 00:59:51 +0000 (0:00:00.327) 0:00:00.724 ********** 2026-04-05 01:02:46.832741 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-04-05 01:02:46.832751 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-04-05 01:02:46.833107 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-04-05 01:02:46.833145 | orchestrator | 2026-04-05 01:02:46.833155 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-04-05 01:02:46.833164 | orchestrator | 2026-04-05 01:02:46.833173 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-05 01:02:46.833182 | orchestrator | Sunday 05 April 2026 00:59:51 +0000 (0:00:00.307) 0:00:01.032 ********** 2026-04-05 01:02:46.833191 | orchestrator | included: 
/ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:02:46.833200 | orchestrator | 2026-04-05 01:02:46.833209 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-04-05 01:02:46.833218 | orchestrator | Sunday 05 April 2026 00:59:52 +0000 (0:00:00.662) 0:00:01.694 ********** 2026-04-05 01:02:46.833227 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-05 01:02:46.833236 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-05 01:02:46.833244 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-05 01:02:46.833253 | orchestrator | 2026-04-05 01:02:46.833262 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-04-05 01:02:46.833270 | orchestrator | Sunday 05 April 2026 00:59:53 +0000 (0:00:01.051) 0:00:02.745 ********** 2026-04-05 01:02:46.833294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:02:46.833307 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:02:46.833330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:02:46.833351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 01:02:46.833369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 01:02:46.833380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 01:02:46.833389 | orchestrator | 2026-04-05 01:02:46.833399 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-05 01:02:46.833408 | orchestrator | Sunday 05 April 2026 00:59:55 +0000 (0:00:01.471) 0:00:04.216 ********** 2026-04-05 01:02:46.833417 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:02:46.833431 | orchestrator | 2026-04-05 01:02:46.833440 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA 
certificates] ***** 2026-04-05 01:02:46.833456 | orchestrator | Sunday 05 April 2026 00:59:55 +0000 (0:00:00.513) 0:00:04.729 ********** 2026-04-05 01:02:46.833466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:02:46.833477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:02:46.833526 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:02:46.833537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 
'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 01:02:46.833562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 01:02:46.833577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 01:02:46.833587 | orchestrator | 2026-04-05 01:02:46.833596 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-04-05 01:02:46.833605 | orchestrator | Sunday 05 April 2026 00:59:58 +0000 (0:00:02.820) 0:00:07.550 ********** 2026-04-05 01:02:46.833615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:02:46.833630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-05 01:02:46.833646 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:02:46.833656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:02:46.833671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-05 01:02:46.833681 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:02:46.833690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:02:46.833707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-05 01:02:46.833724 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:02:46.833735 | orchestrator | 2026-04-05 01:02:46.833745 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-04-05 01:02:46.833756 | orchestrator | Sunday 05 April 2026 00:59:59 +0000 (0:00:01.299) 0:00:08.850 ********** 2026-04-05 01:02:46.833784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 
'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:02:46.833810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-05 01:02:46.833823 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:02:46.833834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 
'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:02:46.833891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:02:46.833905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-05 01:02:46.833917 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:02:46.833933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-05 01:02:46.833945 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:02:46.833955 | orchestrator | 2026-04-05 01:02:46.833966 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-04-05 01:02:46.833977 | orchestrator | Sunday 05 April 2026 01:00:00 +0000 (0:00:01.182) 0:00:10.032 ********** 2026-04-05 01:02:46.833989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:02:46.834098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:02:46.834114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:02:46.834128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 01:02:46.834139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 01:02:46.834164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 01:02:46.834175 | orchestrator | 2026-04-05 01:02:46.834184 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-04-05 01:02:46.834193 | orchestrator | Sunday 05 April 2026 01:00:03 +0000 (0:00:02.709) 0:00:12.742 ********** 2026-04-05 01:02:46.834202 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:02:46.834211 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:02:46.834220 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:02:46.834229 | orchestrator | 2026-04-05 01:02:46.834237 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-04-05 01:02:46.834246 | orchestrator | Sunday 05 April 2026 01:00:06 +0000 (0:00:02.632) 0:00:15.374 ********** 2026-04-05 01:02:46.834255 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:02:46.834263 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:02:46.834272 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:02:46.834284 | orchestrator | 2026-04-05 01:02:46.834299 | orchestrator | TASK 
[service-check-containers : opensearch | Check containers] **************** 2026-04-05 01:02:46.834315 | orchestrator | Sunday 05 April 2026 01:00:07 +0000 (0:00:01.662) 0:00:17.036 ********** 2026-04-05 01:02:46.834337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:02:46.834357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option 
httpchk']}}}}) 2026-04-05 01:02:46.834367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:02:46.834383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 
'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 01:02:46.834398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 01:02:46.834413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 01:02:46.834423 | orchestrator | 2026-04-05 01:02:46.834432 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] *** 2026-04-05 01:02:46.834441 | orchestrator | Sunday 05 April 2026 01:00:10 +0000 (0:00:02.248) 0:00:19.284 ********** 2026-04-05 01:02:46.834450 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 01:02:46.834459 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:02:46.834468 | orchestrator | } 2026-04-05 01:02:46.834477 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 01:02:46.834485 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:02:46.834494 | orchestrator | } 2026-04-05 01:02:46.834503 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 01:02:46.834511 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:02:46.834520 | orchestrator | } 2026-04-05 01:02:46.834529 | orchestrator | 2026-04-05 01:02:46.834537 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 01:02:46.834550 | orchestrator | Sunday 05 April 2026 01:00:10 +0000 (0:00:00.447) 0:00:19.732 ********** 2026-04-05 01:02:46.834560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:02:46.834574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-05 01:02:46.834590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:02:46.834605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-05 
01:02:46.834615 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:02:46.834624 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:02:46.834633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:02:46.834647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-05 01:02:46.834663 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:02:46.834672 | orchestrator | 2026-04-05 01:02:46.834680 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-05 01:02:46.834689 | orchestrator | Sunday 05 April 2026 01:00:11 +0000 (0:00:00.758) 0:00:20.490 ********** 2026-04-05 01:02:46.834698 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:02:46.834706 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:02:46.834715 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:02:46.834723 | orchestrator | 2026-04-05 01:02:46.834732 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-05 01:02:46.834741 | orchestrator | Sunday 05 April 2026 01:00:11 +0000 (0:00:00.436) 0:00:20.926 ********** 2026-04-05 01:02:46.834749 | orchestrator | 2026-04-05 01:02:46.834758 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-05 01:02:46.834767 | orchestrator | Sunday 05 April 2026 01:00:11 +0000 (0:00:00.082) 0:00:21.008 ********** 2026-04-05 01:02:46.834776 | orchestrator | 2026-04-05 01:02:46.834784 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-05 01:02:46.834793 | orchestrator | Sunday 05 April 2026 01:00:12 +0000 (0:00:00.074) 0:00:21.083 ********** 2026-04-05 01:02:46.834802 | orchestrator | 2026-04-05 01:02:46.834810 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-04-05 01:02:46.834819 | orchestrator | Sunday 05 April 2026 01:00:12 +0000 (0:00:00.202) 0:00:21.286 ********** 2026-04-05 01:02:46.834827 | 
orchestrator | skipping: [testbed-node-0] 2026-04-05 01:02:46.834836 | orchestrator | 2026-04-05 01:02:46.834845 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-04-05 01:02:46.834873 | orchestrator | Sunday 05 April 2026 01:00:12 +0000 (0:00:00.181) 0:00:21.467 ********** 2026-04-05 01:02:46.834883 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:02:46.834891 | orchestrator | 2026-04-05 01:02:46.834900 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-04-05 01:02:46.834909 | orchestrator | Sunday 05 April 2026 01:00:12 +0000 (0:00:00.186) 0:00:21.653 ********** 2026-04-05 01:02:46.834918 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:02:46.834926 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:02:46.834935 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:02:46.834944 | orchestrator | 2026-04-05 01:02:46.834953 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-04-05 01:02:46.834962 | orchestrator | Sunday 05 April 2026 01:01:10 +0000 (0:00:57.468) 0:01:19.122 ********** 2026-04-05 01:02:46.834971 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:02:46.834980 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:02:46.834989 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:02:46.834997 | orchestrator | 2026-04-05 01:02:46.835006 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-05 01:02:46.835015 | orchestrator | Sunday 05 April 2026 01:02:30 +0000 (0:01:20.566) 0:02:39.689 ********** 2026-04-05 01:02:46.835029 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:02:46.835039 | orchestrator | 2026-04-05 01:02:46.835047 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 
2026-04-05 01:02:46.835056 | orchestrator | Sunday 05 April 2026 01:02:31 +0000 (0:00:00.708) 0:02:40.397 ********** 2026-04-05 01:02:46.835070 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:02:46.835079 | orchestrator | 2026-04-05 01:02:46.835088 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] ************** 2026-04-05 01:02:46.835097 | orchestrator | Sunday 05 April 2026 01:02:33 +0000 (0:00:02.516) 0:02:42.914 ********** 2026-04-05 01:02:46.835106 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:02:46.835115 | orchestrator | 2026-04-05 01:02:46.835123 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-04-05 01:02:46.835132 | orchestrator | Sunday 05 April 2026 01:02:36 +0000 (0:00:02.164) 0:02:45.079 ********** 2026-04-05 01:02:46.835141 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:02:46.835149 | orchestrator | 2026-04-05 01:02:46.835158 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-04-05 01:02:46.835167 | orchestrator | Sunday 05 April 2026 01:02:38 +0000 (0:00:02.493) 0:02:47.572 ********** 2026-04-05 01:02:46.835176 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:02:46.835184 | orchestrator | 2026-04-05 01:02:46.835194 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-04-05 01:02:46.835203 | orchestrator | Sunday 05 April 2026 01:02:41 +0000 (0:00:02.840) 0:02:50.412 ********** 2026-04-05 01:02:46.835212 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:02:46.835221 | orchestrator | 2026-04-05 01:02:46.835229 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:02:46.835239 | orchestrator | testbed-node-0 : ok=20  changed=12  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 01:02:46.835248 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 
failed=0 skipped=4  rescued=0 ignored=0 2026-04-05 01:02:46.835257 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-05 01:02:46.835265 | orchestrator | 2026-04-05 01:02:46.835274 | orchestrator | 2026-04-05 01:02:46.835288 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:02:46.835297 | orchestrator | Sunday 05 April 2026 01:02:43 +0000 (0:00:02.388) 0:02:52.801 ********** 2026-04-05 01:02:46.835305 | orchestrator | =============================================================================== 2026-04-05 01:02:46.835314 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 80.57s 2026-04-05 01:02:46.835323 | orchestrator | opensearch : Restart opensearch container ------------------------------ 57.47s 2026-04-05 01:02:46.835331 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.84s 2026-04-05 01:02:46.835340 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.82s 2026-04-05 01:02:46.835349 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.71s 2026-04-05 01:02:46.835357 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.63s 2026-04-05 01:02:46.835366 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.52s 2026-04-05 01:02:46.835375 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.49s 2026-04-05 01:02:46.835383 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.39s 2026-04-05 01:02:46.835392 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 2.25s 2026-04-05 01:02:46.835401 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 2.16s 
2026-04-05 01:02:46.835409 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.66s 2026-04-05 01:02:46.835418 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.47s 2026-04-05 01:02:46.835426 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.30s 2026-04-05 01:02:46.835435 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.18s 2026-04-05 01:02:46.835452 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.05s 2026-04-05 01:02:46.835461 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.76s 2026-04-05 01:02:46.835470 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.71s 2026-04-05 01:02:46.835478 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.66s 2026-04-05 01:02:46.835487 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.51s 2026-04-05 01:02:46.835496 | orchestrator | 2026-04-05 01:02:46 | INFO  | Task 325b6600-e709-47a4-b335-835f2bb43dd5 is in state STARTED 2026-04-05 01:02:46.835505 | orchestrator | 2026-04-05 01:02:46 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:02:49.883248 | orchestrator | 2026-04-05 01:02:49 | INFO  | Task 8c012d9f-75c7-447a-9e8a-47e3c6b303aa is in state STARTED 2026-04-05 01:02:49.884994 | orchestrator | 2026-04-05 01:02:49 | INFO  | Task 325b6600-e709-47a4-b335-835f2bb43dd5 is in state STARTED 2026-04-05 01:02:49.885074 | orchestrator | 2026-04-05 01:02:49 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:02:52.932439 | orchestrator | 2026-04-05 01:02:52 | INFO  | Task 8c012d9f-75c7-447a-9e8a-47e3c6b303aa is in state STARTED 2026-04-05 01:02:52.932519 | orchestrator | 2026-04-05 01:02:52 | INFO  | Task 
325b6600-e709-47a4-b335-835f2bb43dd5 is in state STARTED 2026-04-05 01:02:52.932530 | orchestrator | 2026-04-05 01:02:52 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:03:08.210911 | orchestrator | 2026-04-05 01:03:08 | INFO  | Task 8c012d9f-75c7-447a-9e8a-47e3c6b303aa is in state STARTED 2026-04-05 01:03:08.211037 | orchestrator | 2026-04-05 01:03:08 | INFO  | Task 325b6600-e709-47a4-b335-835f2bb43dd5 is in state STARTED 2026-04-05
01:03:08.211062 | orchestrator | 2026-04-05 01:03:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:03:11.263419 | orchestrator | 2026-04-05 01:03:11 | INFO  | Task 8c012d9f-75c7-447a-9e8a-47e3c6b303aa is in state STARTED 2026-04-05 01:03:11.265231 | orchestrator | 2026-04-05 01:03:11 | INFO  | Task 325b6600-e709-47a4-b335-835f2bb43dd5 is in state STARTED 2026-04-05 01:03:11.265290 | orchestrator | 2026-04-05 01:03:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:03:14.322349 | orchestrator | 2026-04-05 01:03:14 | INFO  | Task 8c012d9f-75c7-447a-9e8a-47e3c6b303aa is in state SUCCESS 2026-04-05 01:03:14.333190 | orchestrator | 2026-04-05 01:03:14.333295 | orchestrator | 2026-04-05 01:03:14.333402 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-04-05 01:03:14.333423 | orchestrator | 2026-04-05 01:03:14.333453 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-04-05 01:03:14.333471 | orchestrator | Sunday 05 April 2026 01:02:35 +0000 (0:00:00.211) 0:00:00.211 ********** 2026-04-05 01:03:14.333488 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-04-05 01:03:14.333507 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-05 01:03:14.333525 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-05 01:03:14.333543 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-04-05 01:03:14.333560 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-05 01:03:14.333576 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-04-05 01:03:14.333593 | orchestrator | ok: 
[testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-04-05 01:03:14.333612 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-04-05 01:03:14.334140 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-04-05 01:03:14.334173 | orchestrator | 2026-04-05 01:03:14.334185 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-04-05 01:03:14.334196 | orchestrator | Sunday 05 April 2026 01:02:39 +0000 (0:00:04.808) 0:00:05.019 ********** 2026-04-05 01:03:14.334208 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-04-05 01:03:14.334218 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-05 01:03:14.334229 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-05 01:03:14.334240 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-04-05 01:03:14.334251 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-05 01:03:14.334261 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-04-05 01:03:14.334272 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-04-05 01:03:14.334282 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-04-05 01:03:14.334293 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-04-05 01:03:14.334304 | orchestrator | 2026-04-05 01:03:14.334314 | orchestrator | TASK [Create share directory] 
************************************************** 2026-04-05 01:03:14.334325 | orchestrator | Sunday 05 April 2026 01:02:43 +0000 (0:00:04.065) 0:00:09.085 ********** 2026-04-05 01:03:14.334337 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-05 01:03:14.334348 | orchestrator | 2026-04-05 01:03:14.334359 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-04-05 01:03:14.334454 | orchestrator | Sunday 05 April 2026 01:02:45 +0000 (0:00:01.091) 0:00:10.176 ********** 2026-04-05 01:03:14.334551 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-04-05 01:03:14.334565 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-04-05 01:03:14.334959 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-04-05 01:03:14.334998 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-04-05 01:03:14.335017 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-04-05 01:03:14.335035 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-04-05 01:03:14.335053 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-04-05 01:03:14.335071 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-04-05 01:03:14.335109 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-04-05 01:03:14.335129 | orchestrator | 2026-04-05 01:03:14.335149 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-04-05 01:03:14.335167 | orchestrator | Sunday 05 April 2026 01:03:00 +0000 (0:00:15.312) 0:00:25.488 ********** 2026-04-05 01:03:14.335186 | orchestrator | ok: [testbed-manager] => 
(item=/opt/configuration/environments/infrastructure/files/ceph) 2026-04-05 01:03:14.335206 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-04-05 01:03:14.335225 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-04-05 01:03:14.335239 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-04-05 01:03:14.335303 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-04-05 01:03:14.335315 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-04-05 01:03:14.335326 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-04-05 01:03:14.335337 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-04-05 01:03:14.335348 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-04-05 01:03:14.335359 | orchestrator | 2026-04-05 01:03:14.335370 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-04-05 01:03:14.335380 | orchestrator | Sunday 05 April 2026 01:03:03 +0000 (0:00:03.574) 0:00:29.063 ********** 2026-04-05 01:03:14.335392 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-04-05 01:03:14.335403 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-04-05 01:03:14.335414 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-04-05 01:03:14.335424 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-04-05 01:03:14.335435 | orchestrator | changed: [testbed-manager] => 
(item=ceph.client.cinder.keyring) 2026-04-05 01:03:14.335446 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-04-05 01:03:14.335457 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-04-05 01:03:14.335468 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-04-05 01:03:14.335478 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-04-05 01:03:14.335489 | orchestrator | 2026-04-05 01:03:14.335500 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:03:14.335511 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 01:03:14.335523 | orchestrator | 2026-04-05 01:03:14.335534 | orchestrator | 2026-04-05 01:03:14.335545 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:03:14.335556 | orchestrator | Sunday 05 April 2026 01:03:10 +0000 (0:00:07.020) 0:00:36.084 ********** 2026-04-05 01:03:14.335582 | orchestrator | =============================================================================== 2026-04-05 01:03:14.335595 | orchestrator | Write ceph keys to the share directory --------------------------------- 15.31s 2026-04-05 01:03:14.335608 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.02s 2026-04-05 01:03:14.335620 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.81s 2026-04-05 01:03:14.335632 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.07s 2026-04-05 01:03:14.335646 | orchestrator | Check if target directories exist --------------------------------------- 3.58s 2026-04-05 01:03:14.335659 | orchestrator | Create share directory -------------------------------------------------- 1.09s 2026-04-05 01:03:14.335747 | 
orchestrator | 2026-04-05 01:03:14.335761 | orchestrator | 2026-04-05 01:03:14.335772 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-04-05 01:03:14.335783 | orchestrator | 2026-04-05 01:03:14.335794 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-04-05 01:03:14.335805 | orchestrator | Sunday 05 April 2026 00:59:51 +0000 (0:00:00.109) 0:00:00.109 ********** 2026-04-05 01:03:14.335816 | orchestrator | ok: [localhost] => { 2026-04-05 01:03:14.335895 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-04-05 01:03:14.335914 | orchestrator | } 2026-04-05 01:03:14.335933 | orchestrator | 2026-04-05 01:03:14.335945 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-04-05 01:03:14.335955 | orchestrator | Sunday 05 April 2026 00:59:51 +0000 (0:00:00.052) 0:00:00.162 ********** 2026-04-05 01:03:14.335966 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-04-05 01:03:14.335977 | orchestrator | ...ignoring 2026-04-05 01:03:14.335988 | orchestrator | 2026-04-05 01:03:14.335999 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-04-05 01:03:14.336010 | orchestrator | Sunday 05 April 2026 00:59:54 +0000 (0:00:03.168) 0:00:03.331 ********** 2026-04-05 01:03:14.336020 | orchestrator | skipping: [localhost] 2026-04-05 01:03:14.336031 | orchestrator | 2026-04-05 01:03:14.336041 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-04-05 01:03:14.336052 | orchestrator | Sunday 05 April 2026 00:59:54 +0000 (0:00:00.078) 0:00:03.409 ********** 2026-04-05 01:03:14.336072 | orchestrator | ok: [localhost] 2026-04-05 01:03:14.336083 | orchestrator | 2026-04-05 01:03:14.336093 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 01:03:14.336104 | orchestrator | 2026-04-05 01:03:14.336115 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 01:03:14.336125 | orchestrator | Sunday 05 April 2026 00:59:54 +0000 (0:00:00.234) 0:00:03.643 ********** 2026-04-05 01:03:14.336136 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:03:14.336146 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:03:14.336157 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:03:14.336168 | orchestrator | 2026-04-05 01:03:14.336178 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 01:03:14.336196 | orchestrator | Sunday 05 April 2026 00:59:54 +0000 (0:00:00.381) 0:00:04.024 ********** 2026-04-05 01:03:14.336215 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-04-05 01:03:14.336233 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2026-04-05 01:03:14.336308 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-04-05 01:03:14.336330 | orchestrator | 2026-04-05 01:03:14.336350 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-04-05 01:03:14.336368 | orchestrator | 2026-04-05 01:03:14.336384 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-04-05 01:03:14.336396 | orchestrator | Sunday 05 April 2026 00:59:55 +0000 (0:00:00.436) 0:00:04.460 ********** 2026-04-05 01:03:14.336409 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-05 01:03:14.336534 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-05 01:03:14.336550 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-05 01:03:14.336564 | orchestrator | 2026-04-05 01:03:14.336576 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-05 01:03:14.336589 | orchestrator | Sunday 05 April 2026 00:59:55 +0000 (0:00:00.384) 0:00:04.845 ********** 2026-04-05 01:03:14.336601 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:03:14.336621 | orchestrator | 2026-04-05 01:03:14.336635 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-04-05 01:03:14.336646 | orchestrator | Sunday 05 April 2026 00:59:56 +0000 (0:00:00.750) 0:00:05.595 ********** 2026-04-05 01:03:14.336664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-05 01:03:14.336732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-05 01:03:14.336757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-05 01:03:14.336770 | orchestrator | 2026-04-05 01:03:14.336781 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-04-05 01:03:14.336792 | orchestrator | Sunday 05 April 2026 00:59:59 +0000 (0:00:03.340) 0:00:08.936 ********** 2026-04-05 01:03:14.336803 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:03:14.336815 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:03:14.336856 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:03:14.336869 | orchestrator | 2026-04-05 01:03:14.336880 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-04-05 01:03:14.336891 | orchestrator | Sunday 05 April 2026 01:00:00 +0000 (0:00:00.710) 0:00:09.646 ********** 2026-04-05 01:03:14.336902 | orchestrator | skipping: [testbed-node-1] 2026-04-05 
01:03:14.336912 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:03:14.336923 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:03:14.336934 | orchestrator | 2026-04-05 01:03:14.336945 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-04-05 01:03:14.336956 | orchestrator | Sunday 05 April 2026 01:00:02 +0000 (0:00:01.468) 0:00:11.114 ********** 2026-04-05 01:03:14.337013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-05 01:03:14.337037 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-05 01:03:14.337095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-05 
01:03:14.337118 | orchestrator |
2026-04-05 01:03:14.337129 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2026-04-05 01:03:14.337140 | orchestrator | Sunday 05 April 2026 01:00:05 +0000 (0:00:03.460) 0:00:14.575 **********
2026-04-05 01:03:14.337151 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:03:14.337162 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:03:14.337172 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:03:14.337183 | orchestrator |
2026-04-05 01:03:14.337194 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2026-04-05 01:03:14.337204 | orchestrator | Sunday 05 April 2026 01:00:06 +0000 (0:00:01.100) 0:00:15.675 **********
2026-04-05 01:03:14.337215 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:03:14.337226 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:03:14.337236 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:03:14.337247 | orchestrator |
2026-04-05 01:03:14.337257 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-05 01:03:14.337268 | orchestrator | Sunday 05 April 2026 01:00:10 +0000 (0:00:03.882) 0:00:19.558 **********
2026-04-05 01:03:14.337279 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:03:14.337290 | orchestrator |
2026-04-05 01:03:14.337301 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-04-05 01:03:14.337312 | orchestrator | Sunday 05 April 2026 01:00:11 +0000 (0:00:00.611) 0:00:20.170 **********
2026-04-05 01:03:14.337324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes':
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 01:03:14.337347 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:03:14.337387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 01:03:14.337409 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:03:14.337428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 01:03:14.337447 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:03:14.337465 | orchestrator | 2026-04-05 01:03:14.337483 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-05 01:03:14.337513 | orchestrator | Sunday 05 April 2026 01:00:13 +0000 (0:00:02.161) 0:00:22.332 ********** 2026-04-05 01:03:14.337556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 01:03:14.337579 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:03:14.337599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 01:03:14.337619 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:03:14.337655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 01:03:14.337690 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:03:14.337710 | orchestrator | 2026-04-05 01:03:14.337729 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-05 01:03:14.337748 | orchestrator | Sunday 05 April 2026 01:00:16 +0000 (0:00:03.026) 0:00:25.358 ********** 2026-04-05 01:03:14.337765 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 01:03:14.337778 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:03:14.337796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 01:03:14.337815 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:03:14.337867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 01:03:14.337881 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:03:14.337892 | orchestrator | 2026-04-05 01:03:14.337903 | orchestrator | TASK [service-check-containers : mariadb | Check containers] ******************* 2026-04-05 01:03:14.337914 | orchestrator | Sunday 05 April 2026 01:00:20 +0000 
(0:00:03.802) 0:00:29.161 ********** 2026-04-05 01:03:14.337932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-05 01:03:14.337962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-05 01:03:14.337976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-05 01:03:14.337995 | orchestrator | 2026-04-05 01:03:14.338011 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] *** 2026-04-05 01:03:14.338071 | orchestrator | Sunday 05 April 2026 01:00:24 +0000 (0:00:04.146) 0:00:33.307 ********** 2026-04-05 01:03:14.338087 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 01:03:14.338103 | 
orchestrator |  "msg": "Notifying handlers"
2026-04-05 01:03:14.338120 | orchestrator | }
2026-04-05 01:03:14.338136 | orchestrator | changed: [testbed-node-1] => {
2026-04-05 01:03:14.338154 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 01:03:14.338174 | orchestrator | }
2026-04-05 01:03:14.338193 | orchestrator | changed: [testbed-node-2] => {
2026-04-05 01:03:14.338213 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 01:03:14.338232 | orchestrator | }
2026-04-05 01:03:14.338252 | orchestrator |
2026-04-05 01:03:14.338271 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-05 01:03:14.338291 | orchestrator | Sunday 05 April 2026 01:00:24 +0000 (0:00:00.303) 0:00:33.611 **********
2026-04-05 01:03:14.338328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 01:03:14.338362 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:03:14.338399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 01:03:14.338421 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:03:14.338454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 
'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 01:03:14.338477 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:03:14.338497 | orchestrator | 2026-04-05 01:03:14.338516 | orchestrator | TASK [mariadb : Checking for mariadb cluster] ********************************** 2026-04-05 01:03:14.338547 | orchestrator | Sunday 05 April 2026 01:00:27 +0000 (0:00:02.660) 0:00:36.271 ********** 2026-04-05 01:03:14.338567 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:03:14.338587 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:03:14.338607 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:03:14.338627 | orchestrator | 2026-04-05 01:03:14.338647 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] **************************** 2026-04-05 01:03:14.338667 | orchestrator | Sunday 05 April 2026 01:00:27 +0000 (0:00:00.622) 0:00:36.894 ********** 2026-04-05 01:03:14.338687 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:03:14.338706 | orchestrator | 2026-04-05 01:03:14.338726 | orchestrator | TASK [mariadb : Stop MariaDB containers] *************************************** 2026-04-05 01:03:14.338746 | orchestrator | Sunday 05 April 2026 01:00:27 +0000 (0:00:00.124) 0:00:37.018 ********** 2026-04-05 01:03:14.338765 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:03:14.338785 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:03:14.338805 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:03:14.338847 | orchestrator | 2026-04-05 01:03:14.338869 | orchestrator | TASK 
[mariadb : Run MariaDB wsrep recovery] ************************************ 2026-04-05 01:03:14.338889 | orchestrator | Sunday 05 April 2026 01:00:28 +0000 (0:00:00.395) 0:00:37.413 ********** 2026-04-05 01:03:14.338909 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:03:14.338929 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:03:14.338948 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:03:14.338968 | orchestrator | 2026-04-05 01:03:14.338988 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ****************************** 2026-04-05 01:03:14.339008 | orchestrator | Sunday 05 April 2026 01:00:28 +0000 (0:00:00.318) 0:00:37.731 ********** 2026-04-05 01:03:14.339028 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:03:14.339048 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:03:14.339068 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:03:14.339088 | orchestrator | 2026-04-05 01:03:14.339107 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ****************************** 2026-04-05 01:03:14.339127 | orchestrator | Sunday 05 April 2026 01:00:28 +0000 (0:00:00.331) 0:00:38.063 ********** 2026-04-05 01:03:14.339147 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:03:14.339167 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:03:14.339187 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:03:14.339206 | orchestrator | 2026-04-05 01:03:14.339226 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] *************************** 2026-04-05 01:03:14.339244 | orchestrator | Sunday 05 April 2026 01:00:29 +0000 (0:00:00.583) 0:00:38.647 ********** 2026-04-05 01:03:14.339262 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:03:14.339280 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:03:14.339299 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:03:14.339318 | orchestrator | 2026-04-05 01:03:14.339337 | orchestrator | TASK 
[mariadb : Registering MariaDB seqno variable] **************************** 2026-04-05 01:03:14.339356 | orchestrator | Sunday 05 April 2026 01:00:29 +0000 (0:00:00.344) 0:00:38.991 ********** 2026-04-05 01:03:14.339375 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:03:14.339396 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:03:14.339407 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:03:14.339418 | orchestrator | 2026-04-05 01:03:14.339428 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ******************** 2026-04-05 01:03:14.339439 | orchestrator | Sunday 05 April 2026 01:00:30 +0000 (0:00:00.328) 0:00:39.320 ********** 2026-04-05 01:03:14.339450 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-05 01:03:14.339461 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-05 01:03:14.339471 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-05 01:03:14.339482 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:03:14.339493 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-05 01:03:14.339503 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-05 01:03:14.339523 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-05 01:03:14.339534 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:03:14.339553 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-05 01:03:14.339564 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-05 01:03:14.339575 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-05 01:03:14.339586 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:03:14.339596 | orchestrator | 2026-04-05 01:03:14.339607 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] *** 2026-04-05 01:03:14.339618 | orchestrator | Sunday 
05 April 2026 01:00:30 +0000 (0:00:00.436) 0:00:39.757 ********** 2026-04-05 01:03:14.339629 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:03:14.339640 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:03:14.339650 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:03:14.339661 | orchestrator | 2026-04-05 01:03:14.339672 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] ***** 2026-04-05 01:03:14.339683 | orchestrator | Sunday 05 April 2026 01:00:31 +0000 (0:00:00.567) 0:00:40.324 ********** 2026-04-05 01:03:14.339693 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:03:14.339704 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:03:14.339719 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:03:14.339737 | orchestrator | 2026-04-05 01:03:14.339755 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] *************** 2026-04-05 01:03:14.339774 | orchestrator | Sunday 05 April 2026 01:00:31 +0000 (0:00:00.370) 0:00:40.695 ********** 2026-04-05 01:03:14.339794 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:03:14.339813 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:03:14.340025 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:03:14.340052 | orchestrator | 2026-04-05 01:03:14.340064 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] *** 2026-04-05 01:03:14.340075 | orchestrator | Sunday 05 April 2026 01:00:31 +0000 (0:00:00.347) 0:00:41.043 ********** 2026-04-05 01:03:14.340085 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:03:14.340096 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:03:14.340107 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:03:14.340117 | orchestrator | 2026-04-05 01:03:14.340128 | orchestrator | TASK [mariadb : Starting first MariaDB container] ****************************** 2026-04-05 01:03:14.340139 | orchestrator | Sunday 
05 April 2026 01:00:32 +0000 (0:00:00.348) 0:00:41.391 ********** 2026-04-05 01:03:14.340149 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:03:14.340160 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:03:14.340171 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:03:14.340182 | orchestrator | 2026-04-05 01:03:14.340192 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ****************************** 2026-04-05 01:03:14.340204 | orchestrator | Sunday 05 April 2026 01:00:32 +0000 (0:00:00.513) 0:00:41.905 ********** 2026-04-05 01:03:14.340214 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:03:14.340225 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:03:14.340236 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:03:14.340246 | orchestrator | 2026-04-05 01:03:14.340257 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************ 2026-04-05 01:03:14.340267 | orchestrator | Sunday 05 April 2026 01:00:33 +0000 (0:00:00.319) 0:00:42.225 ********** 2026-04-05 01:03:14.340278 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:03:14.340289 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:03:14.340299 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:03:14.340310 | orchestrator | 2026-04-05 01:03:14.340319 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************ 2026-04-05 01:03:14.340329 | orchestrator | Sunday 05 April 2026 01:00:33 +0000 (0:00:00.312) 0:00:42.538 ********** 2026-04-05 01:03:14.340338 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:03:14.340348 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:03:14.340357 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:03:14.340378 | orchestrator | 2026-04-05 01:03:14.340387 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] **************************** 2026-04-05 01:03:14.340397 | orchestrator | Sunday 05 
April 2026 01:00:33 +0000 (0:00:00.309) 0:00:42.848 ********** 2026-04-05 01:03:14.340429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 01:03:14.340442 | orchestrator | skipping: [testbed-node-0] 2026-04-05 
01:03:14.340452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 01:03:14.340463 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:03:14.340485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 01:03:14.340495 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:03:14.340505 | orchestrator | 2026-04-05 01:03:14.340521 | orchestrator | TASK [mariadb : Wait for slave MariaDB] 
**************************************** 2026-04-05 01:03:14.340531 | orchestrator | Sunday 05 April 2026 01:00:36 +0000 (0:00:02.262) 0:00:45.110 ********** 2026-04-05 01:03:14.340540 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:03:14.340550 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:03:14.340560 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:03:14.340569 | orchestrator | 2026-04-05 01:03:14.340579 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] *************************** 2026-04-05 01:03:14.340588 | orchestrator | Sunday 05 April 2026 01:00:36 +0000 (0:00:00.331) 0:00:45.442 ********** 2026-04-05 01:03:14.340599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': 
{'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 01:03:14.340623 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:03:14.340666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 01:03:14 | INFO  | Task 325b6600-e709-47a4-b335-835f2bb43dd5 is in state SUCCESS 2026-04-05 01:03:14.340689 | orchestrator | 2026-04-05 01:03:14.340705 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:03:14.340723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled':
False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 01:03:14.340753 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:03:14.340770 | orchestrator | 2026-04-05 01:03:14.340785 | orchestrator | TASK [mariadb : Wait for master mariadb] *************************************** 2026-04-05 01:03:14.340799 | orchestrator | Sunday 05 April 2026 01:00:38 +0000 (0:00:02.237) 0:00:47.679 ********** 2026-04-05 01:03:14.340814 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:03:14.340898 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:03:14.340915 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:03:14.340929 | orchestrator | 2026-04-05 01:03:14.340939 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-04-05 01:03:14.340949 | orchestrator | Sunday 05 April 2026 01:00:38 +0000 (0:00:00.338) 0:00:48.017 ********** 2026-04-05 01:03:14.340958 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:03:14.340967 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:03:14.340977 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:03:14.340986 | orchestrator | 2026-04-05 01:03:14.340996 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-04-05 01:03:14.341005 | orchestrator | Sunday 05 April 2026 01:00:39 +0000 (0:00:00.518) 0:00:48.536 ********** 2026-04-05 01:03:14.341015 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:03:14.341024 | orchestrator | skipping: 
[testbed-node-1] 2026-04-05 01:03:14.341034 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:03:14.341043 | orchestrator | 2026-04-05 01:03:14.341053 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-04-05 01:03:14.341062 | orchestrator | Sunday 05 April 2026 01:00:39 +0000 (0:00:00.365) 0:00:48.902 ********** 2026-04-05 01:03:14.341072 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:03:14.341081 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:03:14.341090 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:03:14.341100 | orchestrator | 2026-04-05 01:03:14.341109 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-04-05 01:03:14.341119 | orchestrator | Sunday 05 April 2026 01:00:40 +0000 (0:00:00.515) 0:00:49.417 ********** 2026-04-05 01:03:14.341128 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:03:14.341138 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:03:14.341147 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:03:14.341162 | orchestrator | 2026-04-05 01:03:14.341172 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-04-05 01:03:14.341182 | orchestrator | Sunday 05 April 2026 01:00:40 +0000 (0:00:00.580) 0:00:49.998 ********** 2026-04-05 01:03:14.341191 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:03:14.341201 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:03:14.341210 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:03:14.341219 | orchestrator | 2026-04-05 01:03:14.341229 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-04-05 01:03:14.341238 | orchestrator | Sunday 05 April 2026 01:00:41 +0000 (0:00:00.948) 0:00:50.946 ********** 2026-04-05 01:03:14.341248 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:03:14.341257 | orchestrator | ok: [testbed-node-1] 
2026-04-05 01:03:14.341267 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:03:14.341276 | orchestrator | 2026-04-05 01:03:14.341286 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-04-05 01:03:14.341295 | orchestrator | Sunday 05 April 2026 01:00:42 +0000 (0:00:00.330) 0:00:51.277 ********** 2026-04-05 01:03:14.341312 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:03:14.341322 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:03:14.341332 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:03:14.341341 | orchestrator | 2026-04-05 01:03:14.341351 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-04-05 01:03:14.341360 | orchestrator | Sunday 05 April 2026 01:00:42 +0000 (0:00:00.392) 0:00:51.669 ********** 2026-04-05 01:03:14.341380 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-04-05 01:03:14.341391 | orchestrator | ...ignoring 2026-04-05 01:03:14.341401 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-04-05 01:03:14.341410 | orchestrator | ...ignoring 2026-04-05 01:03:14.341420 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-04-05 01:03:14.341429 | orchestrator | ...ignoring 2026-04-05 01:03:14.341439 | orchestrator | 2026-04-05 01:03:14.341448 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-04-05 01:03:14.341458 | orchestrator | Sunday 05 April 2026 01:00:53 +0000 (0:00:10.780) 0:01:02.450 ********** 2026-04-05 01:03:14.341467 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:03:14.341477 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:03:14.341486 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:03:14.341496 | orchestrator | 2026-04-05 01:03:14.341505 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-04-05 01:03:14.341515 | orchestrator | Sunday 05 April 2026 01:00:53 +0000 (0:00:00.571) 0:01:03.021 ********** 2026-04-05 01:03:14.341524 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:03:14.341534 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:03:14.341544 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:03:14.341553 | orchestrator | 2026-04-05 01:03:14.341563 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-04-05 01:03:14.341572 | orchestrator | Sunday 05 April 2026 01:00:54 +0000 (0:00:00.334) 0:01:03.356 ********** 2026-04-05 01:03:14.341582 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:03:14.341591 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:03:14.341601 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:03:14.341610 | orchestrator | 2026-04-05 01:03:14.341620 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-04-05 01:03:14.341629 | orchestrator | Sunday 05 April 2026 01:00:54 +0000 (0:00:00.364) 0:01:03.720 ********** 2026-04-05 01:03:14.341639 | orchestrator | skipping: 
[testbed-node-0]
2026-04-05 01:03:14.341649 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:03:14.341658 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:03:14.341668 | orchestrator |
2026-04-05 01:03:14.341677 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-04-05 01:03:14.341687 | orchestrator | Sunday 05 April 2026 01:00:54 +0000 (0:00:00.335) 0:01:04.056 **********
2026-04-05 01:03:14.341696 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:03:14.341706 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:03:14.341715 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:03:14.341724 | orchestrator |
2026-04-05 01:03:14.341734 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-04-05 01:03:14.341743 | orchestrator | Sunday 05 April 2026 01:00:55 +0000 (0:00:00.529) 0:01:04.585 **********
2026-04-05 01:03:14.341753 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:03:14.341762 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:03:14.341772 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:03:14.341781 | orchestrator |
2026-04-05 01:03:14.341791 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-05 01:03:14.341800 | orchestrator | Sunday 05 April 2026 01:00:55 +0000 (0:00:00.359) 0:01:04.945 **********
2026-04-05 01:03:14.341810 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:03:14.341819 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:03:14.341855 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-04-05 01:03:14.341873 | orchestrator |
2026-04-05 01:03:14.341890 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-04-05 01:03:14.341915 | orchestrator | Sunday 05 April 2026 01:00:56 +0000 (0:00:00.386) 0:01:05.331 **********
2026-04-05 01:03:14.341929 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:03:14.341939 | orchestrator |
2026-04-05 01:03:14.341948 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-04-05 01:03:14.341958 | orchestrator | Sunday 05 April 2026 01:01:06 +0000 (0:00:10.648) 0:01:15.980 **********
2026-04-05 01:03:14.341968 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:03:14.341977 | orchestrator |
2026-04-05 01:03:14.341987 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-05 01:03:14.341996 | orchestrator | Sunday 05 April 2026 01:01:07 +0000 (0:00:00.113) 0:01:16.094 **********
2026-04-05 01:03:14.342006 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:03:14.342056 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:03:14.342069 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:03:14.342078 | orchestrator |
2026-04-05 01:03:14.342088 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-04-05 01:03:14.342098 | orchestrator | Sunday 05 April 2026 01:01:08 +0000 (0:00:01.301) 0:01:17.395 **********
2026-04-05 01:03:14.342107 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:03:14.342117 | orchestrator |
2026-04-05 01:03:14.342126 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-04-05 01:03:14.342136 | orchestrator | Sunday 05 April 2026 01:01:16 +0000 (0:00:08.414) 0:01:25.809 **********
2026-04-05 01:03:14.342146 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:03:14.342155 | orchestrator |
2026-04-05 01:03:14.342165 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-04-05 01:03:14.342174 | orchestrator | Sunday 05 April 2026 01:01:18 +0000 (0:00:01.773) 0:01:27.583 **********
2026-04-05 01:03:14.342184 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:03:14.342194 | orchestrator |
2026-04-05 01:03:14.342211 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2026-04-05 01:03:14.342221 | orchestrator | Sunday 05 April 2026 01:01:20 +0000 (0:00:02.142) 0:01:29.726 **********
2026-04-05 01:03:14.342231 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:03:14.342240 | orchestrator |
2026-04-05 01:03:14.342250 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-04-05 01:03:14.342260 | orchestrator | Sunday 05 April 2026 01:01:20 +0000 (0:00:00.138) 0:01:29.865 **********
2026-04-05 01:03:14.342269 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:03:14.342279 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:03:14.342289 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:03:14.342298 | orchestrator |
2026-04-05 01:03:14.342308 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-04-05 01:03:14.342318 | orchestrator | Sunday 05 April 2026 01:01:21 +0000 (0:00:00.571) 0:01:30.437 **********
2026-04-05 01:03:14.342328 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:03:14.342337 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:03:14.342347 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:03:14.342356 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-04-05 01:03:14.342366 | orchestrator |
2026-04-05 01:03:14.342376 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-04-05 01:03:14.342385 | orchestrator | skipping: no hosts matched
2026-04-05 01:03:14.342395 | orchestrator |
2026-04-05 01:03:14.342404 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-04-05 01:03:14.342414 | orchestrator |
2026-04-05 01:03:14.342423 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-04-05 01:03:14.342433 | orchestrator | Sunday 05 April 2026 01:01:21 +0000 (0:00:00.364) 0:01:30.801 **********
2026-04-05 01:03:14.342442 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:03:14.342452 | orchestrator |
2026-04-05 01:03:14.342461 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-04-05 01:03:14.342471 | orchestrator | Sunday 05 April 2026 01:01:40 +0000 (0:00:18.618) 0:01:49.420 **********
2026-04-05 01:03:14.342491 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:03:14.342501 | orchestrator |
2026-04-05 01:03:14.342510 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-04-05 01:03:14.342520 | orchestrator | Sunday 05 April 2026 01:01:55 +0000 (0:00:15.588) 0:02:05.008 **********
2026-04-05 01:03:14.342529 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:03:14.342539 | orchestrator |
2026-04-05 01:03:14.342548 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-04-05 01:03:14.342558 | orchestrator |
2026-04-05 01:03:14.342568 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-04-05 01:03:14.342577 | orchestrator | Sunday 05 April 2026 01:01:58 +0000 (0:00:02.278) 0:02:07.287 **********
2026-04-05 01:03:14.342587 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:03:14.342597 | orchestrator |
2026-04-05 01:03:14.342606 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-04-05 01:03:14.342616 | orchestrator | Sunday 05 April 2026 01:02:21 +0000 (0:00:23.014) 0:02:30.301 **********
2026-04-05 01:03:14.342625 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:03:14.342635 | orchestrator |
2026-04-05 01:03:14.342644 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-04-05 01:03:14.342654 | orchestrator | Sunday 05 April 2026 01:02:31 +0000 (0:00:10.635) 0:02:40.937 **********
2026-04-05 01:03:14.342663 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:03:14.342673 | orchestrator |
2026-04-05 01:03:14.342682 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-04-05 01:03:14.342692 | orchestrator |
2026-04-05 01:03:14.342702 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-04-05 01:03:14.342711 | orchestrator | Sunday 05 April 2026 01:02:34 +0000 (0:00:02.515) 0:02:43.452 **********
2026-04-05 01:03:14.342721 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:03:14.342730 | orchestrator |
2026-04-05 01:03:14.342740 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-04-05 01:03:14.342750 | orchestrator | Sunday 05 April 2026 01:02:50 +0000 (0:00:16.318) 0:02:59.770 **********
2026-04-05 01:03:14.342759 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:03:14.342769 | orchestrator |
2026-04-05 01:03:14.342778 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-04-05 01:03:14.342788 | orchestrator | Sunday 05 April 2026 01:02:51 +0000 (0:00:00.584) 0:03:00.355 **********
2026-04-05 01:03:14.342798 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:03:14.342807 | orchestrator |
2026-04-05 01:03:14.342817 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-04-05 01:03:14.342851 | orchestrator |
2026-04-05 01:03:14.342869 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-04-05 01:03:14.342882 | orchestrator | Sunday 05 April 2026 01:02:53 +0000 (0:00:00.541) 0:03:02.607 **********
2026-04-05 01:03:14.342892 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:03:14.342901 | orchestrator |
2026-04-05 01:03:14.342911 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2026-04-05 01:03:14.342925 | orchestrator | Sunday 05 April 2026 01:02:54 +0000 (0:00:00.541) 0:03:03.148 **********
2026-04-05 01:03:14.342935 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:03:14.342945 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:03:14.342954 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:03:14.342963 | orchestrator |
2026-04-05 01:03:14.342973 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2026-04-05 01:03:14.342983 | orchestrator | Sunday 05 April 2026 01:02:56 +0000 (0:00:02.387) 0:03:05.536 **********
2026-04-05 01:03:14.342992 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:03:14.343002 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:03:14.343011 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:03:14.343021 | orchestrator |
2026-04-05 01:03:14.343030 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2026-04-05 01:03:14.343085 | orchestrator | Sunday 05 April 2026 01:02:58 +0000 (0:00:02.209) 0:03:07.746 **********
2026-04-05 01:03:14.343095 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:03:14.343105 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:03:14.343114 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:03:14.343123 | orchestrator |
2026-04-05 01:03:14.343140 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2026-04-05 01:03:14.343150 | orchestrator | Sunday 05 April 2026 01:03:00 +0000 (0:00:02.111) 0:03:09.857 **********
2026-04-05 01:03:14.343159 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:03:14.343169 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:03:14.343178 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:03:14.343188 | orchestrator |
2026-04-05 01:03:14.343197 | orchestrator | TASK [service-check : mariadb | Get container facts] ***************************
2026-04-05 01:03:14.343207 | orchestrator | Sunday 05 April 2026 01:03:02 +0000 (0:00:02.217) 0:03:12.075 **********
2026-04-05 01:03:14.343216 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:03:14.343226 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:03:14.343235 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:03:14.343245 | orchestrator |
2026-04-05 01:03:14.343254 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] ***
2026-04-05 01:03:14.343264 | orchestrator | Sunday 05 April 2026 01:03:08 +0000 (0:00:05.085) 0:03:17.161 **********
2026-04-05 01:03:14.343273 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:03:14.343283 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:03:14.343292 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:03:14.343302 | orchestrator |
2026-04-05 01:03:14.343311 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] **************
2026-04-05 01:03:14.343321 | orchestrator | Sunday 05 April 2026 01:03:10 +0000 (0:00:02.138) 0:03:19.300 **********
2026-04-05 01:03:14.343330 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:03:14.343340 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:03:14.343349 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:03:14.343359 | orchestrator |
2026-04-05 01:03:14.343368 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-04-05 01:03:14.343378 | orchestrator | Sunday 05 April 2026 01:03:10 +0000 (0:00:00.547) 0:03:19.847 **********
2026-04-05 01:03:14.343387 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:03:14.343397 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:03:14.343406 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:03:14.343415 | orchestrator |
2026-04-05 01:03:14.343425 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-04-05 01:03:14.343435 | orchestrator | Sunday 05 April 2026 01:03:13 +0000 (0:00:02.830) 0:03:22.678 **********
2026-04-05 01:03:14.343444 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:03:14.343454 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:03:14.343463 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:03:14.343473 | orchestrator |
2026-04-05 01:03:14.343482 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 01:03:14.343492 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-04-05 01:03:14.343502 | orchestrator | testbed-node-0 : ok=36  changed=17  unreachable=0 failed=0 skipped=39  rescued=0 ignored=1
2026-04-05 01:03:14.343513 | orchestrator | testbed-node-1 : ok=22  changed=8  unreachable=0 failed=0 skipped=45  rescued=0 ignored=1
2026-04-05 01:03:14.343523 | orchestrator | testbed-node-2 : ok=22  changed=8  unreachable=0 failed=0 skipped=45  rescued=0 ignored=1
2026-04-05 01:03:14.343532 | orchestrator |
2026-04-05 01:03:14.343541 | orchestrator |
2026-04-05 01:03:14.343551 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 01:03:14.343566 | orchestrator | Sunday 05 April 2026 01:03:13 +0000 (0:00:00.234) 0:03:22.912 **********
2026-04-05 01:03:14.343576 | orchestrator | ===============================================================================
2026-04-05 01:03:14.343585 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 41.63s
2026-04-05 01:03:14.343595 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 26.22s
2026-04-05 01:03:14.343604 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 16.32s
2026-04-05 01:03:14.343613 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.78s
2026-04-05 01:03:14.343623 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.65s
2026-04-05 01:03:14.343632 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.41s
2026-04-05 01:03:14.343642 | orchestrator | service-check : mariadb | Get container facts --------------------------- 5.09s
2026-04-05 01:03:14.343651 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.79s
2026-04-05 01:03:14.343661 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 4.15s
2026-04-05 01:03:14.343675 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.88s
2026-04-05 01:03:14.343685 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.80s
2026-04-05 01:03:14.343695 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.46s
2026-04-05 01:03:14.343704 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.34s
2026-04-05 01:03:14.343714 | orchestrator | Check MariaDB service --------------------------------------------------- 3.17s
2026-04-05 01:03:14.343723 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.03s
2026-04-05 01:03:14.343732 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.83s
2026-04-05 01:03:14.343742 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.66s
2026-04-05 01:03:14.343751 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.39s
2026-04-05 01:03:14.343767 | orchestrator | mariadb : Restart slave MariaDB container(s) ---------------------------- 2.26s
2026-04-05 01:03:14.343776 | orchestrator | mariadb : 
Wait for MariaDB service to sync WSREP ------------------------ 2.25s 2026-04-05 01:03:14.343786 | orchestrator | 2026-04-05 01:03:14 | INFO  | Task 0bfcdca7-deb4-44d3-a27b-8be2a646655c is in state STARTED 2026-04-05 01:03:14.343796 | orchestrator | 2026-04-05 01:03:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:03:17.376082 | orchestrator | 2026-04-05 01:03:17 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED 2026-04-05 01:03:17.376480 | orchestrator | 2026-04-05 01:03:17 | INFO  | Task 4e7fbe71-7fdd-457d-a52f-862740e505db is in state STARTED 2026-04-05 01:03:17.376505 | orchestrator | 2026-04-05 01:03:17 | INFO  | Task 0bfcdca7-deb4-44d3-a27b-8be2a646655c is in state STARTED 2026-04-05 01:03:17.377695 | orchestrator | 2026-04-05 01:03:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:03:20.422782 | orchestrator | 2026-04-05 01:03:20 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED 2026-04-05 01:03:20.422954 | orchestrator | 2026-04-05 01:03:20 | INFO  | Task 4e7fbe71-7fdd-457d-a52f-862740e505db is in state STARTED 2026-04-05 01:03:20.425011 | orchestrator | 2026-04-05 01:03:20 | INFO  | Task 0bfcdca7-deb4-44d3-a27b-8be2a646655c is in state STARTED 2026-04-05 01:03:20.425099 | orchestrator | 2026-04-05 01:03:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:03:23.464850 | orchestrator | 2026-04-05 01:03:23 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED 2026-04-05 01:03:23.467234 | orchestrator | 2026-04-05 01:03:23 | INFO  | Task 4e7fbe71-7fdd-457d-a52f-862740e505db is in state STARTED 2026-04-05 01:03:23.468071 | orchestrator | 2026-04-05 01:03:23 | INFO  | Task 0bfcdca7-deb4-44d3-a27b-8be2a646655c is in state STARTED 2026-04-05 01:03:23.468088 | orchestrator | 2026-04-05 01:03:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:03:26.504668 | orchestrator | 2026-04-05 01:03:26 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in 
state STARTED 2026-04-05 01:03:26.505091 | orchestrator | 2026-04-05 01:03:26 | INFO  | Task 4e7fbe71-7fdd-457d-a52f-862740e505db is in state STARTED 2026-04-05 01:03:26.506461 | orchestrator | 2026-04-05 01:03:26 | INFO  | Task 0bfcdca7-deb4-44d3-a27b-8be2a646655c is in state STARTED 2026-04-05 01:03:26.506514 | orchestrator | 2026-04-05 01:03:26 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:03:29.546972 | orchestrator | 2026-04-05 01:03:29 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED 2026-04-05 01:03:29.548052 | orchestrator | 2026-04-05 01:03:29 | INFO  | Task 4e7fbe71-7fdd-457d-a52f-862740e505db is in state STARTED 2026-04-05 01:03:29.549319 | orchestrator | 2026-04-05 01:03:29 | INFO  | Task 0bfcdca7-deb4-44d3-a27b-8be2a646655c is in state STARTED 2026-04-05 01:03:29.550509 | orchestrator | 2026-04-05 01:03:29 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:03:32.585025 | orchestrator | 2026-04-05 01:03:32 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED 2026-04-05 01:03:32.588236 | orchestrator | 2026-04-05 01:03:32 | INFO  | Task 4e7fbe71-7fdd-457d-a52f-862740e505db is in state STARTED 2026-04-05 01:03:32.590937 | orchestrator | 2026-04-05 01:03:32 | INFO  | Task 0bfcdca7-deb4-44d3-a27b-8be2a646655c is in state STARTED 2026-04-05 01:03:32.590981 | orchestrator | 2026-04-05 01:03:32 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:03:35.634414 | orchestrator | 2026-04-05 01:03:35 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED 2026-04-05 01:03:35.634514 | orchestrator | 2026-04-05 01:03:35 | INFO  | Task 4e7fbe71-7fdd-457d-a52f-862740e505db is in state STARTED 2026-04-05 01:03:35.636572 | orchestrator | 2026-04-05 01:03:35 | INFO  | Task 0bfcdca7-deb4-44d3-a27b-8be2a646655c is in state STARTED 2026-04-05 01:03:35.636628 | orchestrator | 2026-04-05 01:03:35 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:03:38.674661 | orchestrator 
| 2026-04-05 01:03:38 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED 2026-04-05 01:03:38.674917 | orchestrator | 2026-04-05 01:03:38 | INFO  | Task 4e7fbe71-7fdd-457d-a52f-862740e505db is in state STARTED 2026-04-05 01:03:38.675516 | orchestrator | 2026-04-05 01:03:38 | INFO  | Task 0bfcdca7-deb4-44d3-a27b-8be2a646655c is in state STARTED 2026-04-05 01:03:38.675529 | orchestrator | 2026-04-05 01:03:38 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:03:41.718747 | orchestrator | 2026-04-05 01:03:41 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED 2026-04-05 01:03:41.723036 | orchestrator | 2026-04-05 01:03:41 | INFO  | Task 4e7fbe71-7fdd-457d-a52f-862740e505db is in state STARTED 2026-04-05 01:03:41.723106 | orchestrator | 2026-04-05 01:03:41 | INFO  | Task 0bfcdca7-deb4-44d3-a27b-8be2a646655c is in state STARTED 2026-04-05 01:03:41.723119 | orchestrator | 2026-04-05 01:03:41 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:03:44.775404 | orchestrator | 2026-04-05 01:03:44 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED 2026-04-05 01:03:44.777193 | orchestrator | 2026-04-05 01:03:44 | INFO  | Task 4e7fbe71-7fdd-457d-a52f-862740e505db is in state STARTED 2026-04-05 01:03:44.777840 | orchestrator | 2026-04-05 01:03:44 | INFO  | Task 0bfcdca7-deb4-44d3-a27b-8be2a646655c is in state STARTED 2026-04-05 01:03:44.777910 | orchestrator | 2026-04-05 01:03:44 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:03:47.809354 | orchestrator | 2026-04-05 01:03:47 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED 2026-04-05 01:03:47.809460 | orchestrator | 2026-04-05 01:03:47 | INFO  | Task 4e7fbe71-7fdd-457d-a52f-862740e505db is in state STARTED 2026-04-05 01:03:47.810262 | orchestrator | 2026-04-05 01:03:47 | INFO  | Task 0bfcdca7-deb4-44d3-a27b-8be2a646655c is in state STARTED 2026-04-05 01:03:47.810291 | orchestrator | 2026-04-05 01:03:47 | INFO  | 
Wait 1 second(s) until the next check 2026-04-05 01:03:50.851263 | orchestrator | 2026-04-05 01:03:50 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED 2026-04-05 01:03:50.852359 | orchestrator | 2026-04-05 01:03:50 | INFO  | Task 4e7fbe71-7fdd-457d-a52f-862740e505db is in state STARTED 2026-04-05 01:03:50.853276 | orchestrator | 2026-04-05 01:03:50 | INFO  | Task 0bfcdca7-deb4-44d3-a27b-8be2a646655c is in state STARTED 2026-04-05 01:03:50.853418 | orchestrator | 2026-04-05 01:03:50 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:03:53.896186 | orchestrator | 2026-04-05 01:03:53 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED 2026-04-05 01:03:53.896279 | orchestrator | 2026-04-05 01:03:53 | INFO  | Task 4e7fbe71-7fdd-457d-a52f-862740e505db is in state STARTED 2026-04-05 01:03:53.898111 | orchestrator | 2026-04-05 01:03:53 | INFO  | Task 0bfcdca7-deb4-44d3-a27b-8be2a646655c is in state STARTED 2026-04-05 01:03:53.898214 | orchestrator | 2026-04-05 01:03:53 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:03:56.946254 | orchestrator | 2026-04-05 01:03:56 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED 2026-04-05 01:03:56.948181 | orchestrator | 2026-04-05 01:03:56 | INFO  | Task 4e7fbe71-7fdd-457d-a52f-862740e505db is in state STARTED 2026-04-05 01:03:56.950720 | orchestrator | 2026-04-05 01:03:56 | INFO  | Task 0bfcdca7-deb4-44d3-a27b-8be2a646655c is in state STARTED 2026-04-05 01:03:56.950936 | orchestrator | 2026-04-05 01:03:56 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:03:59.985134 | orchestrator | 2026-04-05 01:03:59 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED 2026-04-05 01:03:59.986763 | orchestrator | 2026-04-05 01:03:59 | INFO  | Task 4e7fbe71-7fdd-457d-a52f-862740e505db is in state STARTED 2026-04-05 01:03:59.987978 | orchestrator | 2026-04-05 01:03:59 | INFO  | Task 0bfcdca7-deb4-44d3-a27b-8be2a646655c is in state 
STARTED 2026-04-05 01:03:59.988008 | orchestrator | 2026-04-05 01:03:59 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:04:03.038997 | orchestrator | 2026-04-05 01:04:03 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED 2026-04-05 01:04:03.040183 | orchestrator | 2026-04-05 01:04:03 | INFO  | Task 4e7fbe71-7fdd-457d-a52f-862740e505db is in state STARTED 2026-04-05 01:04:03.040856 | orchestrator | 2026-04-05 01:04:03 | INFO  | Task 0bfcdca7-deb4-44d3-a27b-8be2a646655c is in state STARTED 2026-04-05 01:04:03.042971 | orchestrator | 2026-04-05 01:04:03 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:04:06.087096 | orchestrator | 2026-04-05 01:04:06 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED 2026-04-05 01:04:06.087396 | orchestrator | 2026-04-05 01:04:06 | INFO  | Task 4e7fbe71-7fdd-457d-a52f-862740e505db is in state STARTED 2026-04-05 01:04:06.089929 | orchestrator | 2026-04-05 01:04:06 | INFO  | Task 0bfcdca7-deb4-44d3-a27b-8be2a646655c is in state SUCCESS 2026-04-05 01:04:06.089993 | orchestrator | 2026-04-05 01:04:06 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:04:09.143145 | orchestrator | 2026-04-05 01:04:09 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED 2026-04-05 01:04:09.143214 | orchestrator | 2026-04-05 01:04:09 | INFO  | Task 7d964fc8-dcb4-482b-9a65-55f849254ede is in state STARTED 2026-04-05 01:04:09.143225 | orchestrator | 2026-04-05 01:04:09 | INFO  | Task 4e7fbe71-7fdd-457d-a52f-862740e505db is in state STARTED 2026-04-05 01:04:09.143510 | orchestrator | 2026-04-05 01:04:09 | INFO  | Task 38bc2861-86e5-45d7-9ecc-6a9916d6989f is in state STARTED 2026-04-05 01:04:09.144276 | orchestrator | 2026-04-05 01:04:09 | INFO  | Task 1e0d4be0-510a-4151-a8f6-aa219e662c5b is in state STARTED 2026-04-05 01:04:09.144334 | orchestrator | 2026-04-05 01:04:09 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:04:12.200209 | orchestrator | 
2026-04-05 01:04:12 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED 2026-04-05 01:04:12.202822 | orchestrator | 2026-04-05 01:04:12 | INFO  | Task 7d964fc8-dcb4-482b-9a65-55f849254ede is in state STARTED 2026-04-05 01:04:12.203049 | orchestrator | 2026-04-05 01:04:12 | INFO  | Task 4e7fbe71-7fdd-457d-a52f-862740e505db is in state STARTED 2026-04-05 01:04:12.203871 | orchestrator | 2026-04-05 01:04:12 | INFO  | Task 38bc2861-86e5-45d7-9ecc-6a9916d6989f is in state STARTED 2026-04-05 01:04:12.205459 | orchestrator | 2026-04-05 01:04:12 | INFO  | Task 1e0d4be0-510a-4151-a8f6-aa219e662c5b is in state STARTED 2026-04-05 01:04:12.205486 | orchestrator | 2026-04-05 01:04:12 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:04:15.239429 | orchestrator | 2026-04-05 01:04:15 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED 2026-04-05 01:04:15.240184 | orchestrator | 2026-04-05 01:04:15 | INFO  | Task 7d964fc8-dcb4-482b-9a65-55f849254ede is in state STARTED 2026-04-05 01:04:15.241240 | orchestrator | 2026-04-05 01:04:15 | INFO  | Task 4e7fbe71-7fdd-457d-a52f-862740e505db is in state STARTED 2026-04-05 01:04:15.243379 | orchestrator | 2026-04-05 01:04:15 | INFO  | Task 38bc2861-86e5-45d7-9ecc-6a9916d6989f is in state STARTED 2026-04-05 01:04:15.244429 | orchestrator | 2026-04-05 01:04:15 | INFO  | Task 1e0d4be0-510a-4151-a8f6-aa219e662c5b is in state STARTED 2026-04-05 01:04:15.244662 | orchestrator | 2026-04-05 01:04:15 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:04:18.304153 | orchestrator | 2026-04-05 01:04:18 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED 2026-04-05 01:04:18.305416 | orchestrator | 2026-04-05 01:04:18 | INFO  | Task 7d964fc8-dcb4-482b-9a65-55f849254ede is in state STARTED 2026-04-05 01:04:18.307214 | orchestrator | 2026-04-05 01:04:18 | INFO  | Task 4e7fbe71-7fdd-457d-a52f-862740e505db is in state STARTED 2026-04-05 01:04:18.309050 | orchestrator | 
2026-04-05 01:04:18 | INFO  | Task 38bc2861-86e5-45d7-9ecc-6a9916d6989f is in state STARTED 2026-04-05 01:04:18.310593 | orchestrator | 2026-04-05 01:04:18 | INFO  | Task 1e0d4be0-510a-4151-a8f6-aa219e662c5b is in state STARTED 2026-04-05 01:04:18.310669 | orchestrator | 2026-04-05 01:04:18 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:04:21.368329 | orchestrator | 2026-04-05 01:04:21 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED 2026-04-05 01:04:21.368689 | orchestrator | 2026-04-05 01:04:21 | INFO  | Task 7d964fc8-dcb4-482b-9a65-55f849254ede is in state STARTED 2026-04-05 01:04:21.369538 | orchestrator | 2026-04-05 01:04:21 | INFO  | Task 4e7fbe71-7fdd-457d-a52f-862740e505db is in state STARTED 2026-04-05 01:04:21.370839 | orchestrator | 2026-04-05 01:04:21 | INFO  | Task 38bc2861-86e5-45d7-9ecc-6a9916d6989f is in state STARTED 2026-04-05 01:04:21.372013 | orchestrator | 2026-04-05 01:04:21 | INFO  | Task 1e0d4be0-510a-4151-a8f6-aa219e662c5b is in state STARTED 2026-04-05 01:04:21.372040 | orchestrator | 2026-04-05 01:04:21 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:04:24.425455 | orchestrator | 2026-04-05 01:04:24 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED 2026-04-05 01:04:24.426894 | orchestrator | 2026-04-05 01:04:24 | INFO  | Task 7d964fc8-dcb4-482b-9a65-55f849254ede is in state STARTED 2026-04-05 01:04:24.428570 | orchestrator | 2026-04-05 01:04:24 | INFO  | Task 4e7fbe71-7fdd-457d-a52f-862740e505db is in state STARTED 2026-04-05 01:04:24.429832 | orchestrator | 2026-04-05 01:04:24 | INFO  | Task 38bc2861-86e5-45d7-9ecc-6a9916d6989f is in state STARTED 2026-04-05 01:04:24.431257 | orchestrator | 2026-04-05 01:04:24 | INFO  | Task 1e0d4be0-510a-4151-a8f6-aa219e662c5b is in state STARTED 2026-04-05 01:04:24.431289 | orchestrator | 2026-04-05 01:04:24 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:04:27.469527 | orchestrator | 2026-04-05 01:04:27 | INFO  | 
Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED 2026-04-05 01:04:27.470407 | orchestrator | 2026-04-05 01:04:27 | INFO  | Task 7d964fc8-dcb4-482b-9a65-55f849254ede is in state STARTED 2026-04-05 01:04:27.472124 | orchestrator | 2026-04-05 01:04:27 | INFO  | Task 4e7fbe71-7fdd-457d-a52f-862740e505db is in state STARTED 2026-04-05 01:04:27.474378 | orchestrator | 2026-04-05 01:04:27 | INFO  | Task 38bc2861-86e5-45d7-9ecc-6a9916d6989f is in state STARTED 2026-04-05 01:04:27.475360 | orchestrator | 2026-04-05 01:04:27 | INFO  | Task 1e0d4be0-510a-4151-a8f6-aa219e662c5b is in state STARTED 2026-04-05 01:04:27.475406 | orchestrator | 2026-04-05 01:04:27 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:04:30.511993 | orchestrator | 2026-04-05 01:04:30 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED 2026-04-05 01:04:30.514374 | orchestrator | 2026-04-05 01:04:30 | INFO  | Task 7d964fc8-dcb4-482b-9a65-55f849254ede is in state STARTED 2026-04-05 01:04:30.515158 | orchestrator | 2026-04-05 01:04:30 | INFO  | Task 4e7fbe71-7fdd-457d-a52f-862740e505db is in state STARTED 2026-04-05 01:04:30.516067 | orchestrator | 2026-04-05 01:04:30 | INFO  | Task 38bc2861-86e5-45d7-9ecc-6a9916d6989f is in state STARTED 2026-04-05 01:04:30.517077 | orchestrator | 2026-04-05 01:04:30 | INFO  | Task 1e0d4be0-510a-4151-a8f6-aa219e662c5b is in state STARTED 2026-04-05 01:04:30.517130 | orchestrator | 2026-04-05 01:04:30 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:04:33.542573 | orchestrator | 2026-04-05 01:04:33 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED 2026-04-05 01:04:33.544147 | orchestrator | 2026-04-05 01:04:33 | INFO  | Task 7d964fc8-dcb4-482b-9a65-55f849254ede is in state STARTED 2026-04-05 01:04:33.547006 | orchestrator | 2026-04-05 01:04:33 | INFO  | Task 4e7fbe71-7fdd-457d-a52f-862740e505db is in state STARTED 2026-04-05 01:04:33.548194 | orchestrator | 2026-04-05 01:04:33 | INFO  | Task 
38bc2861-86e5-45d7-9ecc-6a9916d6989f is in state STARTED 2026-04-05 01:04:33.548988 | orchestrator | 2026-04-05 01:04:33 | INFO  | Task 1e0d4be0-510a-4151-a8f6-aa219e662c5b is in state STARTED 2026-04-05 01:04:33.549001 | orchestrator | 2026-04-05 01:04:33 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:04:36.595546 | orchestrator | 2026-04-05 01:04:36 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED 2026-04-05 01:04:36.596024 | orchestrator | 2026-04-05 01:04:36 | INFO  | Task 7d964fc8-dcb4-482b-9a65-55f849254ede is in state STARTED 2026-04-05 01:04:36.600398 | orchestrator | 2026-04-05 01:04:36 | INFO  | Task 4e7fbe71-7fdd-457d-a52f-862740e505db is in state STARTED 2026-04-05 01:04:36.604880 | orchestrator | 2026-04-05 01:04:36 | INFO  | Task 38bc2861-86e5-45d7-9ecc-6a9916d6989f is in state STARTED 2026-04-05 01:04:36.607626 | orchestrator | 2026-04-05 01:04:36 | INFO  | Task 1e0d4be0-510a-4151-a8f6-aa219e662c5b is in state STARTED 2026-04-05 01:04:36.607685 | orchestrator | 2026-04-05 01:04:36 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:04:39.656198 | orchestrator | 2026-04-05 01:04:39 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED 2026-04-05 01:04:39.656285 | orchestrator | 2026-04-05 01:04:39 | INFO  | Task 7d964fc8-dcb4-482b-9a65-55f849254ede is in state STARTED 2026-04-05 01:04:39.657704 | orchestrator | 2026-04-05 01:04:39 | INFO  | Task 4e7fbe71-7fdd-457d-a52f-862740e505db is in state STARTED 2026-04-05 01:04:39.658300 | orchestrator | 2026-04-05 01:04:39 | INFO  | Task 38bc2861-86e5-45d7-9ecc-6a9916d6989f is in state STARTED 2026-04-05 01:04:39.659012 | orchestrator | 2026-04-05 01:04:39 | INFO  | Task 1e0d4be0-510a-4151-a8f6-aa219e662c5b is in state STARTED 2026-04-05 01:04:39.659056 | orchestrator | 2026-04-05 01:04:39 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:04:42.700225 | orchestrator | 2026-04-05 01:04:42 | INFO  | Task 
9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED 2026-04-05 01:04:57.984950 | orchestrator | 2026-04-05 01:04:57 | INFO  | Task 7d964fc8-dcb4-482b-9a65-55f849254ede is in state STARTED 2026-04-05 01:04:57.986148 | orchestrator | 2026-04-05 01:04:57 | INFO  | Task 4e7fbe71-7fdd-457d-a52f-862740e505db is in state STARTED 2026-04-05 01:04:57.987342 | orchestrator | 2026-04-05 01:04:57 | INFO  | Task 38bc2861-86e5-45d7-9ecc-6a9916d6989f is in state STARTED 2026-04-05 01:04:57.989191 | orchestrator | 2026-04-05 01:04:57 | INFO  | Task 1e0d4be0-510a-4151-a8f6-aa219e662c5b is in state STARTED 2026-04-05 01:04:57.989251 | orchestrator | 2026-04-05 01:04:57 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:05:01.044411 | orchestrator | 2026-04-05 01:05:01 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED 2026-04-05 01:05:01.044531 | orchestrator | 2026-04-05 01:05:01 | INFO  | Task 7d964fc8-dcb4-482b-9a65-55f849254ede is in state SUCCESS 2026-04-05 01:05:01.044812 | orchestrator | 2026-04-05 01:05:01.044825 | orchestrator | 2026-04-05 01:05:01.044830 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-04-05 01:05:01.044834 | orchestrator | 2026-04-05 01:05:01.044838 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-04-05 01:05:01.044842 | orchestrator | Sunday 05 April 2026 01:03:14 +0000 (0:00:00.324) 0:00:00.324 ********** 2026-04-05 01:05:01.044847 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-04-05 01:05:01.044886 | orchestrator | 2026-04-05 01:05:01.044891 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-04-05 01:05:01.044895 | orchestrator | Sunday 05 April 2026 01:03:15 +0000 (0:00:00.220) 0:00:00.545 ********** 2026-04-05 01:05:01.044899 | orchestrator | changed: 
[testbed-manager] => (item=/opt/cephclient/configuration) 2026-04-05 01:05:01.044903 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-04-05 01:05:01.044908 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-04-05 01:05:01.044915 | orchestrator | 2026-04-05 01:05:01.044921 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-04-05 01:05:01.044942 | orchestrator | Sunday 05 April 2026 01:03:16 +0000 (0:00:01.527) 0:00:02.073 ********** 2026-04-05 01:05:01.044950 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-04-05 01:05:01.044957 | orchestrator | 2026-04-05 01:05:01.044964 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-04-05 01:05:01.044971 | orchestrator | Sunday 05 April 2026 01:03:17 +0000 (0:00:01.027) 0:00:03.101 ********** 2026-04-05 01:05:01.045002 | orchestrator | changed: [testbed-manager] 2026-04-05 01:05:01.045006 | orchestrator | 2026-04-05 01:05:01.045010 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-04-05 01:05:01.045014 | orchestrator | Sunday 05 April 2026 01:03:18 +0000 (0:00:00.766) 0:00:03.867 ********** 2026-04-05 01:05:01.045018 | orchestrator | changed: [testbed-manager] 2026-04-05 01:05:01.045022 | orchestrator | 2026-04-05 01:05:01.045025 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-04-05 01:05:01.045029 | orchestrator | Sunday 05 April 2026 01:03:19 +0000 (0:00:01.020) 0:00:04.888 ********** 2026-04-05 01:05:01.045033 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
2026-04-05 01:05:01.045037 | orchestrator | ok: [testbed-manager] 2026-04-05 01:05:01.045041 | orchestrator | 2026-04-05 01:05:01.045045 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-04-05 01:05:01.045048 | orchestrator | Sunday 05 April 2026 01:03:55 +0000 (0:00:36.053) 0:00:40.942 ********** 2026-04-05 01:05:01.045052 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-04-05 01:05:01.045056 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-04-05 01:05:01.045060 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-04-05 01:05:01.045064 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-04-05 01:05:01.045067 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-04-05 01:05:01.045071 | orchestrator | 2026-04-05 01:05:01.045075 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-04-05 01:05:01.045078 | orchestrator | Sunday 05 April 2026 01:03:59 +0000 (0:00:04.105) 0:00:45.047 ********** 2026-04-05 01:05:01.045082 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-04-05 01:05:01.045086 | orchestrator | 2026-04-05 01:05:01.045090 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-04-05 01:05:01.045094 | orchestrator | Sunday 05 April 2026 01:04:00 +0000 (0:00:00.710) 0:00:45.758 ********** 2026-04-05 01:05:01.045098 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:05:01.045102 | orchestrator | 2026-04-05 01:05:01.045105 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-04-05 01:05:01.045109 | orchestrator | Sunday 05 April 2026 01:04:00 +0000 (0:00:00.146) 0:00:45.905 ********** 2026-04-05 01:05:01.045113 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:05:01.045117 | orchestrator | 2026-04-05 01:05:01.045120 | orchestrator | RUNNING HANDLER 
[osism.services.cephclient : Restart cephclient service] ******* 2026-04-05 01:05:01.045124 | orchestrator | Sunday 05 April 2026 01:04:00 +0000 (0:00:00.311) 0:00:46.216 ********** 2026-04-05 01:05:01.045128 | orchestrator | changed: [testbed-manager] 2026-04-05 01:05:01.045132 | orchestrator | 2026-04-05 01:05:01.045135 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-04-05 01:05:01.045139 | orchestrator | Sunday 05 April 2026 01:04:02 +0000 (0:00:01.453) 0:00:47.669 ********** 2026-04-05 01:05:01.045143 | orchestrator | changed: [testbed-manager] 2026-04-05 01:05:01.045147 | orchestrator | 2026-04-05 01:05:01.045150 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-04-05 01:05:01.045154 | orchestrator | Sunday 05 April 2026 01:04:03 +0000 (0:00:00.728) 0:00:48.398 ********** 2026-04-05 01:05:01.045158 | orchestrator | changed: [testbed-manager] 2026-04-05 01:05:01.045162 | orchestrator | 2026-04-05 01:05:01.045165 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-04-05 01:05:01.045173 | orchestrator | Sunday 05 April 2026 01:04:03 +0000 (0:00:00.570) 0:00:48.969 ********** 2026-04-05 01:05:01.045177 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-04-05 01:05:01.045181 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-04-05 01:05:01.045192 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-04-05 01:05:01.045196 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-04-05 01:05:01.045200 | orchestrator | 2026-04-05 01:05:01.045204 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:05:01.045207 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 01:05:01.045211 | orchestrator | 2026-04-05 01:05:01.045215 | orchestrator | 2026-04-05 
01:05:01.045225 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:05:01.045229 | orchestrator | Sunday 05 April 2026 01:04:04 +0000 (0:00:01.399) 0:00:50.369 ********** 2026-04-05 01:05:01.045232 | orchestrator | =============================================================================== 2026-04-05 01:05:01.045236 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 36.05s 2026-04-05 01:05:01.045240 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.11s 2026-04-05 01:05:01.045244 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.53s 2026-04-05 01:05:01.045247 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.45s 2026-04-05 01:05:01.045251 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.40s 2026-04-05 01:05:01.045255 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.03s 2026-04-05 01:05:01.045259 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.02s 2026-04-05 01:05:01.045262 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.77s 2026-04-05 01:05:01.045266 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.73s 2026-04-05 01:05:01.045270 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.71s 2026-04-05 01:05:01.045273 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.57s 2026-04-05 01:05:01.045277 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.31s 2026-04-05 01:05:01.045281 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.22s 2026-04-05 01:05:01.045285 | 
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.15s 2026-04-05 01:05:01.045288 | orchestrator | 2026-04-05 01:05:01.045292 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-05 01:05:01.045296 | orchestrator | 2.16.14 2026-04-05 01:05:01.045300 | orchestrator | 2026-04-05 01:05:01.045304 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-04-05 01:05:01.045307 | orchestrator | 2026-04-05 01:05:01.045311 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-04-05 01:05:01.045315 | orchestrator | Sunday 05 April 2026 01:04:09 +0000 (0:00:00.227) 0:00:00.227 ********** 2026-04-05 01:05:01.045319 | orchestrator | changed: [testbed-manager] 2026-04-05 01:05:01.045322 | orchestrator | 2026-04-05 01:05:01.045326 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-04-05 01:05:01.045330 | orchestrator | Sunday 05 April 2026 01:04:11 +0000 (0:00:02.347) 0:00:02.575 ********** 2026-04-05 01:05:01.045334 | orchestrator | changed: [testbed-manager] 2026-04-05 01:05:01.045337 | orchestrator | 2026-04-05 01:05:01.045341 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-04-05 01:05:01.045345 | orchestrator | Sunday 05 April 2026 01:04:13 +0000 (0:00:01.700) 0:00:04.275 ********** 2026-04-05 01:05:01.045349 | orchestrator | changed: [testbed-manager] 2026-04-05 01:05:01.045353 | orchestrator | 2026-04-05 01:05:01.045356 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-04-05 01:05:01.045363 | orchestrator | Sunday 05 April 2026 01:04:14 +0000 (0:00:01.400) 0:00:05.676 ********** 2026-04-05 01:05:01.045366 | orchestrator | changed: [testbed-manager] 2026-04-05 01:05:01.045370 | orchestrator | 2026-04-05 01:05:01.045374 | orchestrator | TASK 
[Set mgr/dashboard/standby_behaviour to error] **************************** 2026-04-05 01:05:01.045378 | orchestrator | Sunday 05 April 2026 01:04:16 +0000 (0:00:01.354) 0:00:07.030 ********** 2026-04-05 01:05:01.045382 | orchestrator | changed: [testbed-manager] 2026-04-05 01:05:01.045385 | orchestrator | 2026-04-05 01:05:01.045389 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-04-05 01:05:01.045393 | orchestrator | Sunday 05 April 2026 01:04:17 +0000 (0:00:01.179) 0:00:08.210 ********** 2026-04-05 01:05:01.045397 | orchestrator | changed: [testbed-manager] 2026-04-05 01:05:01.045400 | orchestrator | 2026-04-05 01:05:01.045404 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-04-05 01:05:01.045408 | orchestrator | Sunday 05 April 2026 01:04:18 +0000 (0:00:01.367) 0:00:09.577 ********** 2026-04-05 01:05:01.045412 | orchestrator | changed: [testbed-manager] 2026-04-05 01:05:01.045415 | orchestrator | 2026-04-05 01:05:01.045419 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-04-05 01:05:01.045425 | orchestrator | Sunday 05 April 2026 01:04:20 +0000 (0:00:01.997) 0:00:11.575 ********** 2026-04-05 01:05:01.045431 | orchestrator | changed: [testbed-manager] 2026-04-05 01:05:01.045438 | orchestrator | 2026-04-05 01:05:01.045444 | orchestrator | TASK [Create admin user] ******************************************************* 2026-04-05 01:05:01.045451 | orchestrator | Sunday 05 April 2026 01:04:22 +0000 (0:00:01.314) 0:00:12.889 ********** 2026-04-05 01:05:01.045456 | orchestrator | changed: [testbed-manager] 2026-04-05 01:05:01.045460 | orchestrator | 2026-04-05 01:05:01.045465 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-04-05 01:05:01.045469 | orchestrator | Sunday 05 April 2026 01:04:32 +0000 (0:00:10.348) 0:00:23.237 ********** 2026-04-05 
01:05:01.045474 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:05:01.045478 | orchestrator | 2026-04-05 01:05:01.045483 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-05 01:05:01.045487 | orchestrator | 2026-04-05 01:05:01.045492 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-05 01:05:01.045498 | orchestrator | Sunday 05 April 2026 01:04:32 +0000 (0:00:00.138) 0:00:23.376 ********** 2026-04-05 01:05:01.045503 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:05:01.045508 | orchestrator | 2026-04-05 01:05:01.045513 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-05 01:05:01.045517 | orchestrator | 2026-04-05 01:05:01.045521 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-05 01:05:01.045526 | orchestrator | Sunday 05 April 2026 01:04:34 +0000 (0:00:01.934) 0:00:25.310 ********** 2026-04-05 01:05:01.045530 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:05:01.045535 | orchestrator | 2026-04-05 01:05:01.045542 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-05 01:05:01.045547 | orchestrator | 2026-04-05 01:05:01.045551 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-05 01:05:01.045556 | orchestrator | Sunday 05 April 2026 01:04:47 +0000 (0:00:12.641) 0:00:37.952 ********** 2026-04-05 01:05:01.045560 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:05:01.045565 | orchestrator | 2026-04-05 01:05:01.045569 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:05:01.045573 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 01:05:01.045578 | orchestrator | testbed-node-0 : 
ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 01:05:01.045583 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 01:05:01.045590 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 01:05:01.045595 | orchestrator | 2026-04-05 01:05:01.045599 | orchestrator | 2026-04-05 01:05:01.045603 | orchestrator | 2026-04-05 01:05:01.045608 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:05:01.045612 | orchestrator | Sunday 05 April 2026 01:04:58 +0000 (0:00:11.591) 0:00:49.543 ********** 2026-04-05 01:05:01.045617 | orchestrator | =============================================================================== 2026-04-05 01:05:01.045621 | orchestrator | Restart ceph manager service ------------------------------------------- 26.17s 2026-04-05 01:05:01.045626 | orchestrator | Create admin user ------------------------------------------------------ 10.35s 2026-04-05 01:05:01.045630 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.35s 2026-04-05 01:05:01.045635 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.00s 2026-04-05 01:05:01.045639 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.70s 2026-04-05 01:05:01.045644 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.40s 2026-04-05 01:05:01.045648 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.37s 2026-04-05 01:05:01.045653 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.35s 2026-04-05 01:05:01.045657 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.31s 2026-04-05 01:05:01.045662 | orchestrator | Set 
mgr/dashboard/standby_behaviour to error ---------------------------- 1.18s 2026-04-05 01:05:01.045666 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.14s 2026-04-05 01:05:01.046722 | orchestrator | 2026-04-05 01:05:01 | INFO  | Task 4e7fbe71-7fdd-457d-a52f-862740e505db is in state STARTED 2026-04-05 01:05:01.047526 | orchestrator | 2026-04-05 01:05:01 | INFO  | Task 38bc2861-86e5-45d7-9ecc-6a9916d6989f is in state STARTED 2026-04-05 01:05:01.048353 | orchestrator | 2026-04-05 01:05:01 | INFO  | Task 1e0d4be0-510a-4151-a8f6-aa219e662c5b is in state STARTED 2026-04-05 01:05:01.048375 | orchestrator | 2026-04-05 01:05:01 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:05:04.093768 | orchestrator | 2026-04-05 01:05:04 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED 2026-04-05 01:05:04.100240 | orchestrator | 2026-04-05 01:05:04.100328 | orchestrator | 2026-04-05 01:05:04.100344 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 01:05:04.100357 | orchestrator | 2026-04-05 01:05:04.100368 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 01:05:04.100498 | orchestrator | Sunday 05 April 2026 01:03:17 +0000 (0:00:00.304) 0:00:00.304 ********** 2026-04-05 01:05:04.100510 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:05:04.100550 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:05:04.100562 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:05:04.100573 | orchestrator | 2026-04-05 01:05:04.100584 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 01:05:04.100595 | orchestrator | Sunday 05 April 2026 01:03:17 +0000 (0:00:00.269) 0:00:00.574 ********** 2026-04-05 01:05:04.100606 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-04-05 01:05:04.100617 | orchestrator | ok: [testbed-node-1] => 
(item=enable_horizon_True) 2026-04-05 01:05:04.100628 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-04-05 01:05:04.100638 | orchestrator | 2026-04-05 01:05:04.100649 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-04-05 01:05:04.100660 | orchestrator | 2026-04-05 01:05:04.100671 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-05 01:05:04.100681 | orchestrator | Sunday 05 April 2026 01:03:17 +0000 (0:00:00.307) 0:00:00.881 ********** 2026-04-05 01:05:04.100717 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:05:04.100729 | orchestrator | 2026-04-05 01:05:04.100755 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-04-05 01:05:04.100767 | orchestrator | Sunday 05 April 2026 01:03:18 +0000 (0:00:00.550) 0:00:01.432 ********** 2026-04-05 01:05:04.100788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 01:05:04.100838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 01:05:04.100863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 01:05:04.100881 | orchestrator | 2026-04-05 01:05:04.100900 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-04-05 
01:05:04.100920 | orchestrator | Sunday 05 April 2026 01:03:20 +0000 (0:00:01.749) 0:00:03.181 ********** 2026-04-05 01:05:04.100946 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:05:04.100968 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:05:04.101015 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:05:04.101034 | orchestrator | 2026-04-05 01:05:04.101052 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-05 01:05:04.101083 | orchestrator | Sunday 05 April 2026 01:03:20 +0000 (0:00:00.338) 0:00:03.520 ********** 2026-04-05 01:05:04.101101 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-05 01:05:04.101119 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-05 01:05:04.101138 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-04-05 01:05:04.101170 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-04-05 01:05:04.101189 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-04-05 01:05:04.101209 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-04-05 01:05:04.101227 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-04-05 01:05:04.101246 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-04-05 01:05:04.101263 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-05 01:05:04.101282 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-05 01:05:04.101299 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-04-05 01:05:04.101326 | orchestrator | skipping: [testbed-node-1] => (item={'name': 
'masakari', 'enabled': False})  2026-04-05 01:05:04.101378 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-04-05 01:05:04.101398 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-04-05 01:05:04.101416 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-04-05 01:05:04.101434 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-04-05 01:05:04.101452 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-05 01:05:04.101470 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-05 01:05:04.101488 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-04-05 01:05:04.101506 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-04-05 01:05:04.101525 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-04-05 01:05:04.101544 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-04-05 01:05:04.101563 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-04-05 01:05:04.101579 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-04-05 01:05:04.101597 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-04-05 01:05:04.101617 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-04-05 01:05:04.101634 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-04-05 01:05:04.101653 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-04-05 01:05:04.101672 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-04-05 01:05:04.101688 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-04-05 01:05:04.101706 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-04-05 01:05:04.101724 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-04-05 01:05:04.101741 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-04-05 01:05:04.101797 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-04-05 01:05:04.101816 | orchestrator | 2026-04-05 01:05:04.101835 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-05 01:05:04.101853 | orchestrator | Sunday 05 April 2026 01:03:21 +0000 (0:00:00.942) 0:00:04.462 ********** 2026-04-05 01:05:04.101871 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:05:04.101913 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:05:04.101949 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:05:04.101969 | orchestrator | 2026-04-05 01:05:04.102113 | 
orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-05 01:05:04.102136 | orchestrator | Sunday 05 April 2026 01:03:21 +0000 (0:00:00.384) 0:00:04.847 ********** 2026-04-05 01:05:04.102154 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:04.102166 | orchestrator | 2026-04-05 01:05:04.102177 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-05 01:05:04.102188 | orchestrator | Sunday 05 April 2026 01:03:22 +0000 (0:00:00.154) 0:00:05.001 ********** 2026-04-05 01:05:04.102199 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:04.102210 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:04.102220 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:04.102231 | orchestrator | 2026-04-05 01:05:04.102242 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-05 01:05:04.102253 | orchestrator | Sunday 05 April 2026 01:03:22 +0000 (0:00:00.349) 0:00:05.351 ********** 2026-04-05 01:05:04.102263 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:05:04.102278 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:05:04.102297 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:05:04.102308 | orchestrator | 2026-04-05 01:05:04.102319 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-05 01:05:04.102330 | orchestrator | Sunday 05 April 2026 01:03:22 +0000 (0:00:00.352) 0:00:05.704 ********** 2026-04-05 01:05:04.102340 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:04.102351 | orchestrator | 2026-04-05 01:05:04.102361 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-05 01:05:04.102372 | orchestrator | Sunday 05 April 2026 01:03:22 +0000 (0:00:00.121) 0:00:05.825 ********** 2026-04-05 01:05:04.102400 | orchestrator | skipping: [testbed-node-0] 2026-04-05 
01:05:04.102411 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:04.102422 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:04.102433 | orchestrator | 2026-04-05 01:05:04.102443 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-05 01:05:04.102454 | orchestrator | Sunday 05 April 2026 01:03:23 +0000 (0:00:00.458) 0:00:06.284 ********** 2026-04-05 01:05:04.102464 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:05:04.102475 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:05:04.102486 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:05:04.102497 | orchestrator | 2026-04-05 01:05:04.102507 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-05 01:05:04.102518 | orchestrator | Sunday 05 April 2026 01:03:23 +0000 (0:00:00.395) 0:00:06.680 ********** 2026-04-05 01:05:04.102528 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:04.102539 | orchestrator | 2026-04-05 01:05:04.102550 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-05 01:05:04.102561 | orchestrator | Sunday 05 April 2026 01:03:23 +0000 (0:00:00.140) 0:00:06.820 ********** 2026-04-05 01:05:04.102571 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:04.102582 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:04.102593 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:04.102604 | orchestrator | 2026-04-05 01:05:04.102614 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-05 01:05:04.102625 | orchestrator | Sunday 05 April 2026 01:03:24 +0000 (0:00:00.271) 0:00:07.092 ********** 2026-04-05 01:05:04.102647 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:05:04.102657 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:05:04.102668 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:05:04.102678 | orchestrator | 
2026-04-05 01:05:04.102689 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-05 01:05:04.102700 | orchestrator | Sunday 05 April 2026 01:03:24 +0000 (0:00:00.304) 0:00:07.397 ********** 2026-04-05 01:05:04.102710 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:04.102721 | orchestrator | 2026-04-05 01:05:04.102732 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-05 01:05:04.102742 | orchestrator | Sunday 05 April 2026 01:03:24 +0000 (0:00:00.121) 0:00:07.518 ********** 2026-04-05 01:05:04.102753 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:04.102763 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:04.102774 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:04.102785 | orchestrator | 2026-04-05 01:05:04.102795 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-05 01:05:04.102806 | orchestrator | Sunday 05 April 2026 01:03:24 +0000 (0:00:00.385) 0:00:07.903 ********** 2026-04-05 01:05:04.102819 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:05:04.102838 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:05:04.102856 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:05:04.102874 | orchestrator | 2026-04-05 01:05:04.102892 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-05 01:05:04.102909 | orchestrator | Sunday 05 April 2026 01:03:25 +0000 (0:00:00.283) 0:00:08.187 ********** 2026-04-05 01:05:04.102926 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:04.102944 | orchestrator | 2026-04-05 01:05:04.102962 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-05 01:05:04.103056 | orchestrator | Sunday 05 April 2026 01:03:25 +0000 (0:00:00.126) 0:00:08.314 ********** 2026-04-05 01:05:04.103079 | orchestrator | skipping: 
[testbed-node-0] 2026-04-05 01:05:04.103097 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:04.103117 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:04.103135 | orchestrator | 2026-04-05 01:05:04.103153 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-05 01:05:04.103171 | orchestrator | Sunday 05 April 2026 01:03:25 +0000 (0:00:00.374) 0:00:08.688 ********** 2026-04-05 01:05:04.103188 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:05:04.103207 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:05:04.103225 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:05:04.103245 | orchestrator | 2026-04-05 01:05:04.103263 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-05 01:05:04.103283 | orchestrator | Sunday 05 April 2026 01:03:26 +0000 (0:00:00.395) 0:00:09.084 ********** 2026-04-05 01:05:04.103301 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:04.103318 | orchestrator | 2026-04-05 01:05:04.103337 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-05 01:05:04.103357 | orchestrator | Sunday 05 April 2026 01:03:26 +0000 (0:00:00.120) 0:00:09.205 ********** 2026-04-05 01:05:04.103375 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:04.103390 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:04.103409 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:04.103419 | orchestrator | 2026-04-05 01:05:04.103429 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-05 01:05:04.103439 | orchestrator | Sunday 05 April 2026 01:03:26 +0000 (0:00:00.347) 0:00:09.552 ********** 2026-04-05 01:05:04.103448 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:05:04.103458 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:05:04.103468 | orchestrator | ok: [testbed-node-2] 2026-04-05 
01:05:04.103477 | orchestrator | 2026-04-05 01:05:04.103487 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-05 01:05:04.103496 | orchestrator | Sunday 05 April 2026 01:03:26 +0000 (0:00:00.363) 0:00:09.916 ********** 2026-04-05 01:05:04.103506 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:04.103526 | orchestrator | 2026-04-05 01:05:04.103536 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-05 01:05:04.103545 | orchestrator | Sunday 05 April 2026 01:03:27 +0000 (0:00:00.215) 0:00:10.132 ********** 2026-04-05 01:05:04.103554 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:04.103564 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:04.103573 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:04.103583 | orchestrator | 2026-04-05 01:05:04.103592 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-05 01:05:04.103602 | orchestrator | Sunday 05 April 2026 01:03:27 +0000 (0:00:00.337) 0:00:10.469 ********** 2026-04-05 01:05:04.103612 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:05:04.103621 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:05:04.103631 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:05:04.103640 | orchestrator | 2026-04-05 01:05:04.103657 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-05 01:05:04.103667 | orchestrator | Sunday 05 April 2026 01:03:28 +0000 (0:00:00.723) 0:00:11.193 ********** 2026-04-05 01:05:04.103676 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:04.103686 | orchestrator | 2026-04-05 01:05:04.103695 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-05 01:05:04.103705 | orchestrator | Sunday 05 April 2026 01:03:28 +0000 (0:00:00.147) 0:00:11.340 ********** 2026-04-05 01:05:04.103714 | 
orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:04.103724 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:04.103734 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:04.103743 | orchestrator | 2026-04-05 01:05:04.103753 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-05 01:05:04.103762 | orchestrator | Sunday 05 April 2026 01:03:28 +0000 (0:00:00.428) 0:00:11.768 ********** 2026-04-05 01:05:04.103772 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:05:04.103782 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:05:04.103791 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:05:04.103801 | orchestrator | 2026-04-05 01:05:04.103810 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-05 01:05:04.103820 | orchestrator | Sunday 05 April 2026 01:03:29 +0000 (0:00:00.500) 0:00:12.269 ********** 2026-04-05 01:05:04.103829 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:04.103839 | orchestrator | 2026-04-05 01:05:04.103848 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-05 01:05:04.103858 | orchestrator | Sunday 05 April 2026 01:03:29 +0000 (0:00:00.179) 0:00:12.449 ********** 2026-04-05 01:05:04.103867 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:04.103877 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:04.103886 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:04.103895 | orchestrator | 2026-04-05 01:05:04.103905 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-05 01:05:04.103914 | orchestrator | Sunday 05 April 2026 01:03:30 +0000 (0:00:00.575) 0:00:13.024 ********** 2026-04-05 01:05:04.103924 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:05:04.103933 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:05:04.103943 | orchestrator | ok: 
[testbed-node-2] 2026-04-05 01:05:04.103952 | orchestrator | 2026-04-05 01:05:04.103962 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-05 01:05:04.103971 | orchestrator | Sunday 05 April 2026 01:03:30 +0000 (0:00:00.317) 0:00:13.342 ********** 2026-04-05 01:05:04.104011 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:04.104029 | orchestrator | 2026-04-05 01:05:04.104042 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-05 01:05:04.104051 | orchestrator | Sunday 05 April 2026 01:03:30 +0000 (0:00:00.194) 0:00:13.537 ********** 2026-04-05 01:05:04.104061 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:04.104070 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:04.104080 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:04.104089 | orchestrator | 2026-04-05 01:05:04.104106 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-04-05 01:05:04.104120 | orchestrator | Sunday 05 April 2026 01:03:30 +0000 (0:00:00.275) 0:00:13.812 ********** 2026-04-05 01:05:04.104142 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:05:04.104165 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:05:04.104180 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:05:04.104196 | orchestrator | 2026-04-05 01:05:04.104211 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-04-05 01:05:04.104228 | orchestrator | Sunday 05 April 2026 01:03:32 +0000 (0:00:01.775) 0:00:15.588 ********** 2026-04-05 01:05:04.104239 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-05 01:05:04.104249 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-05 01:05:04.104259 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-05 01:05:04.104268 | orchestrator | 2026-04-05 01:05:04.104278 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-04-05 01:05:04.104287 | orchestrator | Sunday 05 April 2026 01:03:35 +0000 (0:00:02.884) 0:00:18.472 ********** 2026-04-05 01:05:04.104297 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-05 01:05:04.104308 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-05 01:05:04.104327 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-05 01:05:04.104337 | orchestrator | 2026-04-05 01:05:04.104347 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-04-05 01:05:04.104357 | orchestrator | Sunday 05 April 2026 01:03:37 +0000 (0:00:02.424) 0:00:20.897 ********** 2026-04-05 01:05:04.104367 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-05 01:05:04.104377 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-05 01:05:04.104386 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-05 01:05:04.104396 | orchestrator | 2026-04-05 01:05:04.104405 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-04-05 01:05:04.104415 | orchestrator | Sunday 05 April 2026 01:03:39 +0000 (0:00:01.624) 0:00:22.522 ********** 2026-04-05 01:05:04.104424 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:04.104434 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:04.104444 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:04.104453 | 
orchestrator | 2026-04-05 01:05:04.104463 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-04-05 01:05:04.104472 | orchestrator | Sunday 05 April 2026 01:03:39 +0000 (0:00:00.299) 0:00:22.821 ********** 2026-04-05 01:05:04.104482 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:04.104499 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:04.104509 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:04.104519 | orchestrator | 2026-04-05 01:05:04.104528 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-05 01:05:04.104538 | orchestrator | Sunday 05 April 2026 01:03:40 +0000 (0:00:00.517) 0:00:23.338 ********** 2026-04-05 01:05:04.104547 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:05:04.104557 | orchestrator | 2026-04-05 01:05:04.104567 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-04-05 01:05:04.104576 | orchestrator | Sunday 05 April 2026 01:03:40 +0000 (0:00:00.613) 0:00:23.952 ********** 2026-04-05 01:05:04.104591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 01:05:04.104630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 
'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 01:05:04.104656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 
01:05:04.104667 | orchestrator | 2026-04-05 01:05:04.104677 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-04-05 01:05:04.104687 | orchestrator | Sunday 05 April 2026 01:03:42 +0000 (0:00:01.605) 0:00:25.558 ********** 2026-04-05 01:05:04.104703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 01:05:04.104720 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:04.104737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 
'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80',
2026-04-05 01:05:04 | INFO  | Task 4e7fbe71-7fdd-457d-a52f-862740e505db is in state SUCCESS
2026-04-05 01:05:04.104750 | orchestrator | 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 01:05:04.104761 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:04.104778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 01:05:04.104795 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:04.104804 | orchestrator | 2026-04-05 01:05:04.104814 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-04-05 01:05:04.104824 | orchestrator | Sunday 05 April 2026 01:03:43 +0000 (0:00:00.930) 0:00:26.489 ********** 2026-04-05 01:05:04.104847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 
'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 01:05:04.104864 | orchestrator | 
skipping: [testbed-node-0] 2026-04-05 01:05:04.104874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 01:05:04.104885 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:04.104913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 01:05:04.104929 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:04.104939 | orchestrator | 2026-04-05 01:05:04.104949 | orchestrator | TASK [service-check-containers : horizon | Check containers] ******************* 2026-04-05 01:05:04.104958 | orchestrator | Sunday 05 April 2026 01:03:44 +0000 (0:00:01.093) 0:00:27.582 ********** 2026-04-05 01:05:04.104969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 
'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 01:05:04.105014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 01:05:04.105071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 
'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 01:05:04.105084 | orchestrator | 2026-04-05 01:05:04.105094 | orchestrator | TASK [service-check-containers : horizon | Notify handlers to restart containers] *** 2026-04-05 01:05:04.105104 | orchestrator | Sunday 05 April 
2026 01:03:45 +0000 (0:00:01.361) 0:00:28.944 ********** 2026-04-05 01:05:04.105114 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 01:05:04.105123 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:05:04.105133 | orchestrator | } 2026-04-05 01:05:04.105143 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 01:05:04.105152 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:05:04.105162 | orchestrator | } 2026-04-05 01:05:04.105179 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 01:05:04.105188 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:05:04.105198 | orchestrator | } 2026-04-05 01:05:04.105207 | orchestrator | 2026-04-05 01:05:04.105217 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 01:05:04.105227 | orchestrator | Sunday 05 April 2026 01:03:46 +0000 (0:00:00.339) 0:00:29.283 ********** 2026-04-05 01:05:04.105242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 01:05:04.105253 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:04.105277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 01:05:04.105294 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:04.105304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 
'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 01:05:04.105315 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:04.105325 | orchestrator | 2026-04-05 01:05:04.105334 | orchestrator | TASK [horizon : 
include_tasks] ************************************************* 2026-04-05 01:05:04.105344 | orchestrator | Sunday 05 April 2026 01:03:47 +0000 (0:00:01.539) 0:00:30.823 ********** 2026-04-05 01:05:04.105353 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:04.105363 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:04.105373 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:04.105383 | orchestrator | 2026-04-05 01:05:04.105392 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-05 01:05:04.105402 | orchestrator | Sunday 05 April 2026 01:03:48 +0000 (0:00:00.298) 0:00:31.121 ********** 2026-04-05 01:05:04.105417 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:05:04.105433 | orchestrator | 2026-04-05 01:05:04.105443 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-04-05 01:05:04.105452 | orchestrator | Sunday 05 April 2026 01:03:48 +0000 (0:00:00.770) 0:00:31.891 ********** 2026-04-05 01:05:04.105462 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:05:04.105471 | orchestrator | 2026-04-05 01:05:04.105481 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-04-05 01:05:04.105491 | orchestrator | Sunday 05 April 2026 01:03:51 +0000 (0:00:02.349) 0:00:34.241 ********** 2026-04-05 01:05:04.105500 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:05:04.105510 | orchestrator | 2026-04-05 01:05:04.105519 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-04-05 01:05:04.105529 | orchestrator | Sunday 05 April 2026 01:03:53 +0000 (0:00:02.205) 0:00:36.447 ********** 2026-04-05 01:05:04.105539 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:05:04.105548 | orchestrator | 2026-04-05 01:05:04.105558 | orchestrator | TASK [horizon 
: Flush handlers] ************************************************ 2026-04-05 01:05:04.105567 | orchestrator | Sunday 05 April 2026 01:04:10 +0000 (0:00:17.438) 0:00:53.885 ********** 2026-04-05 01:05:04.105577 | orchestrator | 2026-04-05 01:05:04.105587 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-05 01:05:04.105596 | orchestrator | Sunday 05 April 2026 01:04:10 +0000 (0:00:00.062) 0:00:53.948 ********** 2026-04-05 01:05:04.105606 | orchestrator | 2026-04-05 01:05:04.105615 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-05 01:05:04.105630 | orchestrator | Sunday 05 April 2026 01:04:11 +0000 (0:00:00.059) 0:00:54.007 ********** 2026-04-05 01:05:04.105640 | orchestrator | 2026-04-05 01:05:04.105650 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-04-05 01:05:04.105660 | orchestrator | Sunday 05 April 2026 01:04:11 +0000 (0:00:00.060) 0:00:54.068 ********** 2026-04-05 01:05:04.105669 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:05:04.105679 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:05:04.105688 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:05:04.105698 | orchestrator | 2026-04-05 01:05:04.105707 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:05:04.105717 | orchestrator | testbed-node-0 : ok=38  changed=12  unreachable=0 failed=0 skipped=26  rescued=0 ignored=0 2026-04-05 01:05:04.105728 | orchestrator | testbed-node-1 : ok=35  changed=9  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-04-05 01:05:04.105738 | orchestrator | testbed-node-2 : ok=35  changed=9  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-04-05 01:05:04.105747 | orchestrator | 2026-04-05 01:05:04.105757 | orchestrator | 2026-04-05 01:05:04.105767 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-05 01:05:04.105777 | orchestrator | Sunday 05 April 2026 01:05:01 +0000 (0:00:50.771) 0:01:44.839 ********** 2026-04-05 01:05:04.105787 | orchestrator | =============================================================================== 2026-04-05 01:05:04.105797 | orchestrator | horizon : Restart horizon container ------------------------------------ 50.77s 2026-04-05 01:05:04.105806 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 17.44s 2026-04-05 01:05:04.105816 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.88s 2026-04-05 01:05:04.105826 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.42s 2026-04-05 01:05:04.105835 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.35s 2026-04-05 01:05:04.105845 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.21s 2026-04-05 01:05:04.105854 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.78s 2026-04-05 01:05:04.105864 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.75s 2026-04-05 01:05:04.105880 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.62s 2026-04-05 01:05:04.105890 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.61s 2026-04-05 01:05:04.105899 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.54s 2026-04-05 01:05:04.105909 | orchestrator | service-check-containers : horizon | Check containers ------------------- 1.36s 2026-04-05 01:05:04.105918 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.09s 2026-04-05 01:05:04.105927 | orchestrator | horizon : include_tasks 
------------------------------------------------- 0.94s 2026-04-05 01:05:04.105937 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.93s 2026-04-05 01:05:04.105947 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.77s 2026-04-05 01:05:04.105956 | orchestrator | horizon : Update policy file name --------------------------------------- 0.72s 2026-04-05 01:05:04.105966 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.61s 2026-04-05 01:05:04.105976 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.57s 2026-04-05 01:05:04.106115 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.55s 2026-04-05 01:05:04.106126 | orchestrator | 2026-04-05 01:05:04 | INFO  | Task 38bc2861-86e5-45d7-9ecc-6a9916d6989f is in state STARTED 2026-04-05 01:05:04.106136 | orchestrator | 2026-04-05 01:05:04 | INFO  | Task 1e0d4be0-510a-4151-a8f6-aa219e662c5b is in state STARTED 2026-04-05 01:05:04.106153 | orchestrator | 2026-04-05 01:05:04 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:05:07.158868 | orchestrator | 2026-04-05 01:05:07 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED 2026-04-05 01:05:07.159527 | orchestrator | 2026-04-05 01:05:07 | INFO  | Task 38bc2861-86e5-45d7-9ecc-6a9916d6989f is in state STARTED 2026-04-05 01:05:07.160481 | orchestrator | 2026-04-05 01:05:07 | INFO  | Task 1e0d4be0-510a-4151-a8f6-aa219e662c5b is in state STARTED 2026-04-05 01:05:07.160668 | orchestrator | 2026-04-05 01:05:07 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:05:10.198868 | orchestrator | 2026-04-05 01:05:10 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED 2026-04-05 01:05:10.200442 | orchestrator | 2026-04-05 01:05:10 | INFO  | Task 38bc2861-86e5-45d7-9ecc-6a9916d6989f is in state STARTED 2026-04-05 
01:05:10.203418 | orchestrator | 2026-04-05 01:05:10 | INFO  | Task 1e0d4be0-510a-4151-a8f6-aa219e662c5b is in state STARTED
2026-04-05 01:05:10.204248 | orchestrator | 2026-04-05 01:05:10 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:05:13.245529 | orchestrator | 2026-04-05 01:05:13 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED
2026-04-05 01:05:13.245781 | orchestrator | 2026-04-05 01:05:13 | INFO  | Task 38bc2861-86e5-45d7-9ecc-6a9916d6989f is in state STARTED
2026-04-05 01:05:13.246275 | orchestrator | 2026-04-05 01:05:13 | INFO  | Task 1e0d4be0-510a-4151-a8f6-aa219e662c5b is in state STARTED
2026-04-05 01:05:13.246295 | orchestrator | 2026-04-05 01:05:13 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:05:16.286543 | orchestrator | 2026-04-05 01:05:16 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED
2026-04-05 01:05:16.287463 | orchestrator | 2026-04-05 01:05:16 | INFO  | Task 38bc2861-86e5-45d7-9ecc-6a9916d6989f is in state STARTED
2026-04-05 01:05:16.288439 | orchestrator | 2026-04-05 01:05:16 | INFO  | Task 1e0d4be0-510a-4151-a8f6-aa219e662c5b is in state STARTED
2026-04-05 01:05:16.288506 | orchestrator | 2026-04-05 01:05:16 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:05:19.330875 | orchestrator | 2026-04-05 01:05:19 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED
2026-04-05 01:05:19.336744 | orchestrator | 2026-04-05 01:05:19 | INFO  | Task 38bc2861-86e5-45d7-9ecc-6a9916d6989f is in state STARTED
2026-04-05 01:05:19.338403 | orchestrator | 2026-04-05 01:05:19 | INFO  | Task 1e0d4be0-510a-4151-a8f6-aa219e662c5b is in state STARTED
2026-04-05 01:05:19.338458 | orchestrator | 2026-04-05 01:05:19 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:05:22.393917 | orchestrator | 2026-04-05 01:05:22 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED
2026-04-05 01:05:22.394238 | orchestrator | 2026-04-05 01:05:22 | INFO  | Task 38bc2861-86e5-45d7-9ecc-6a9916d6989f is in state STARTED
2026-04-05 01:05:22.394799 | orchestrator | 2026-04-05 01:05:22 | INFO  | Task 1e0d4be0-510a-4151-a8f6-aa219e662c5b is in state STARTED
2026-04-05 01:05:22.394923 | orchestrator | 2026-04-05 01:05:22 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:05:25.432755 | orchestrator | 2026-04-05 01:05:25 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED
2026-04-05 01:05:25.433717 | orchestrator | 2026-04-05 01:05:25 | INFO  | Task 38bc2861-86e5-45d7-9ecc-6a9916d6989f is in state STARTED
2026-04-05 01:05:25.435716 | orchestrator | 2026-04-05 01:05:25 | INFO  | Task 1e0d4be0-510a-4151-a8f6-aa219e662c5b is in state STARTED
2026-04-05 01:05:25.435788 | orchestrator | 2026-04-05 01:05:25 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:05:28.485545 | orchestrator | 2026-04-05 01:05:28 | INFO  | Task c5da41c6-f339-4e68-bc33-6435a46ec6cc is in state STARTED
2026-04-05 01:05:28.487443 | orchestrator | 2026-04-05 01:05:28 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED
2026-04-05 01:05:28.488222 | orchestrator | 2026-04-05 01:05:28 | INFO  | Task 6f0d4905-1f37-4414-99ea-58c037e4cf82 is in state STARTED
2026-04-05 01:05:28.490545 | orchestrator | 2026-04-05 01:05:28 | INFO  | Task 38bc2861-86e5-45d7-9ecc-6a9916d6989f is in state STARTED
2026-04-05 01:05:28.493314 | orchestrator | 2026-04-05 01:05:28 | INFO  | Task 1e0d4be0-510a-4151-a8f6-aa219e662c5b is in state SUCCESS
2026-04-05 01:05:28.494340 | orchestrator | 2026-04-05 01:05:28 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:05:31.531779 | orchestrator | 2026-04-05 01:05:31 | INFO  | Task c5da41c6-f339-4e68-bc33-6435a46ec6cc is in state STARTED
2026-04-05 01:05:31.533473 | orchestrator | 2026-04-05 01:05:31 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED
2026-04-05 01:05:31.535099 | orchestrator | 2026-04-05 01:05:31 | INFO  | Task 6f0d4905-1f37-4414-99ea-58c037e4cf82 is in state STARTED
2026-04-05 01:05:31.537350 | orchestrator | 2026-04-05 01:05:31 | INFO  | Task 38bc2861-86e5-45d7-9ecc-6a9916d6989f is in state STARTED
2026-04-05 01:05:31.537624 | orchestrator | 2026-04-05 01:05:31 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:05:34.579302 | orchestrator | 2026-04-05 01:05:34 | INFO  | Task c5da41c6-f339-4e68-bc33-6435a46ec6cc is in state STARTED
2026-04-05 01:05:34.580698 | orchestrator | 2026-04-05 01:05:34 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED
2026-04-05 01:05:34.582546 | orchestrator | 2026-04-05 01:05:34 | INFO  | Task 6f0d4905-1f37-4414-99ea-58c037e4cf82 is in state STARTED
2026-04-05 01:05:34.584349 | orchestrator | 2026-04-05 01:05:34 | INFO  | Task 38bc2861-86e5-45d7-9ecc-6a9916d6989f is in state STARTED
2026-04-05 01:05:34.584558 | orchestrator | 2026-04-05 01:05:34 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:05:37.632589 | orchestrator | 2026-04-05 01:05:37 | INFO  | Task c5da41c6-f339-4e68-bc33-6435a46ec6cc is in state STARTED
2026-04-05 01:05:37.634719 | orchestrator | 2026-04-05 01:05:37 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED
2026-04-05 01:05:37.637141 | orchestrator | 2026-04-05 01:05:37 | INFO  | Task 6f0d4905-1f37-4414-99ea-58c037e4cf82 is in state STARTED
2026-04-05 01:05:37.639977 | orchestrator | 2026-04-05 01:05:37 | INFO  | Task 38bc2861-86e5-45d7-9ecc-6a9916d6989f is in state STARTED
2026-04-05 01:05:37.640108 | orchestrator | 2026-04-05 01:05:37 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:05:40.680159 | orchestrator | 2026-04-05 01:05:40 | INFO  | Task c5da41c6-f339-4e68-bc33-6435a46ec6cc is in state STARTED
2026-04-05 01:05:40.680875 | orchestrator | 2026-04-05 01:05:40 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED
2026-04-05 01:05:40.681782 | orchestrator | 2026-04-05 01:05:40 | INFO  | Task 6f0d4905-1f37-4414-99ea-58c037e4cf82 is in state STARTED
2026-04-05 01:05:40.682752 | orchestrator | 2026-04-05 01:05:40 | INFO  | Task 38bc2861-86e5-45d7-9ecc-6a9916d6989f is in state STARTED
2026-04-05 01:05:40.682797 | orchestrator | 2026-04-05 01:05:40 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:05:43.730834 | orchestrator | 2026-04-05 01:05:43 | INFO  | Task c5da41c6-f339-4e68-bc33-6435a46ec6cc is in state STARTED
2026-04-05 01:05:43.736830 | orchestrator | 2026-04-05 01:05:43 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED
2026-04-05 01:05:43.738672 | orchestrator | 2026-04-05 01:05:43 | INFO  | Task 6f0d4905-1f37-4414-99ea-58c037e4cf82 is in state STARTED
2026-04-05 01:05:43.742835 | orchestrator | 2026-04-05 01:05:43 | INFO  | Task 38bc2861-86e5-45d7-9ecc-6a9916d6989f is in state STARTED
2026-04-05 01:05:43.742863 | orchestrator | 2026-04-05 01:05:43 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:05:46.804843 | orchestrator | 2026-04-05 01:05:46 | INFO  | Task c5da41c6-f339-4e68-bc33-6435a46ec6cc is in state STARTED
2026-04-05 01:05:46.805649 | orchestrator | 2026-04-05 01:05:46 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED
2026-04-05 01:05:46.807413 | orchestrator | 2026-04-05 01:05:46 | INFO  | Task 6f0d4905-1f37-4414-99ea-58c037e4cf82 is in state STARTED
2026-04-05 01:05:46.808626 | orchestrator | 2026-04-05 01:05:46 | INFO  | Task 38bc2861-86e5-45d7-9ecc-6a9916d6989f is in state STARTED
2026-04-05 01:05:46.808713 | orchestrator | 2026-04-05 01:05:46 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:05:49.847327 | orchestrator | 2026-04-05 01:05:49 | INFO  | Task c5da41c6-f339-4e68-bc33-6435a46ec6cc is in state STARTED
2026-04-05 01:05:49.847417 | orchestrator | 2026-04-05 01:05:49 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state STARTED
2026-04-05 01:05:49.848924 | orchestrator | 2026-04-05 01:05:49 | INFO  | Task 6f0d4905-1f37-4414-99ea-58c037e4cf82 is in state STARTED
2026-04-05 01:05:49.850460 | orchestrator | 2026-04-05 01:05:49 | INFO  | Task 38bc2861-86e5-45d7-9ecc-6a9916d6989f is in state STARTED
2026-04-05 01:05:49.850490 | orchestrator | 2026-04-05 01:05:49 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:05:52.897393 | orchestrator | 2026-04-05 01:05:52 | INFO  | Task c5da41c6-f339-4e68-bc33-6435a46ec6cc is in state STARTED
2026-04-05 01:05:52.899583 | orchestrator | 2026-04-05 01:05:52 | INFO  | Task 9dfd9f38-3358-4281-aa55-b99ad7b32917 is in state SUCCESS
2026-04-05 01:05:52.900483 | orchestrator |
2026-04-05 01:05:52.900519 | orchestrator |
2026-04-05 01:05:52.900530 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 01:05:52.900540 | orchestrator |
2026-04-05 01:05:52.900553 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 01:05:52.900599 | orchestrator | Sunday 05 April 2026 01:04:08 +0000 (0:00:00.179) 0:00:00.179 **********
2026-04-05 01:05:52.900864 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:05:52.900882 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:05:52.900892 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:05:52.900901 | orchestrator |
2026-04-05 01:05:52.900911 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 01:05:52.900921 | orchestrator | Sunday 05 April 2026 01:04:08 +0000 (0:00:00.344) 0:00:00.526 **********
2026-04-05 01:05:52.900930 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-04-05 01:05:52.900941 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-04-05 01:05:52.900950 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-04-05 01:05:52.900960 | orchestrator |
2026-04-05 01:05:52.900969 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2026-04-05 01:05:52.900979 | orchestrator |
2026-04-05 01:05:52.900988 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2026-04-05 01:05:52.900997 | orchestrator | Sunday 05 April 2026 01:04:09 +0000 (0:00:00.646) 0:00:01.173 **********
2026-04-05 01:05:52.901007 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:05:52.901016 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:05:52.901026 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:05:52.901035 | orchestrator |
2026-04-05 01:05:52.901056 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 01:05:52.901090 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 01:05:52.901104 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 01:05:52.901115 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 01:05:52.901126 | orchestrator |
2026-04-05 01:05:52.901138 | orchestrator |
2026-04-05 01:05:52.901149 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 01:05:52.901160 | orchestrator | Sunday 05 April 2026 01:05:26 +0000 (0:01:17.179) 0:01:18.353 **********
2026-04-05 01:05:52.901172 | orchestrator | ===============================================================================
2026-04-05 01:05:52.901183 | orchestrator | Waiting for Keystone public port to be UP ------------------------------ 77.18s
2026-04-05 01:05:52.901194 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.65s
2026-04-05 01:05:52.901205 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s
2026-04-05 01:05:52.901216 | orchestrator |
2026-04-05 01:05:52.901228 | orchestrator |
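The repeated "is in state STARTED" / "Wait 1 second(s) until the next check" records above come from a simple polling loop over task UUIDs. A minimal sketch of that pattern, assuming a hypothetical `get_state` callback (the function name and interface are illustrative, not the actual osism task API):

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1):
    """Poll each pending task until all of them leave the STARTED state.

    get_state(task_id) is assumed to return a state string such as
    "STARTED" or "SUCCESS"; the real client queries a task backend.
    """
    pending = set(task_ids)
    while pending:
        # sorted() snapshots the set, so discarding while iterating is safe
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)
```

New tasks entering the loop mid-run (as `c5da41c6…` and `6f0d4905…` do in the log) would simply be additional IDs in the pending set of a later call.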
2026-04-05 01:05:52.901239 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 01:05:52.901251 | orchestrator | 2026-04-05 01:05:52.901262 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 01:05:52.901284 | orchestrator | Sunday 05 April 2026 01:03:17 +0000 (0:00:00.330) 0:00:00.330 ********** 2026-04-05 01:05:52.901302 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:05:52.901311 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:05:52.901321 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:05:52.901330 | orchestrator | 2026-04-05 01:05:52.901340 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 01:05:52.901349 | orchestrator | Sunday 05 April 2026 01:03:17 +0000 (0:00:00.286) 0:00:00.616 ********** 2026-04-05 01:05:52.901359 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-04-05 01:05:52.901368 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-04-05 01:05:52.901378 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-04-05 01:05:52.901387 | orchestrator | 2026-04-05 01:05:52.901397 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-04-05 01:05:52.901406 | orchestrator | 2026-04-05 01:05:52.901416 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-05 01:05:52.901435 | orchestrator | Sunday 05 April 2026 01:03:17 +0000 (0:00:00.284) 0:00:00.901 ********** 2026-04-05 01:05:52.901445 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:05:52.901454 | orchestrator | 2026-04-05 01:05:52.901464 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-04-05 01:05:52.901473 | orchestrator | 
Sunday 05 April 2026 01:03:18 +0000 (0:00:00.644) 0:00:01.546 ********** 2026-04-05 01:05:52.901503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 01:05:52.901523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 01:05:52.901536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 01:05:52.901553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 01:05:52.901580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 01:05:52.901613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 01:05:52.901636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 01:05:52.901663 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 01:05:52.901680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 01:05:52.901698 | orchestrator | 2026-04-05 01:05:52.901715 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-04-05 01:05:52.901732 | orchestrator | Sunday 05 April 2026 01:03:21 +0000 (0:00:02.518) 0:00:04.064 ********** 2026-04-05 01:05:52.901749 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:52.901766 | orchestrator | 2026-04-05 01:05:52.901782 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-04-05 01:05:52.901795 | orchestrator | Sunday 05 April 2026 01:03:21 +0000 (0:00:00.118) 0:00:04.183 ********** 2026-04-05 01:05:52.901813 | orchestrator | skipping: [testbed-node-0] 2026-04-05 
01:05:52.901823 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:52.901833 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:52.901842 | orchestrator | 2026-04-05 01:05:52.901852 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-04-05 01:05:52.901861 | orchestrator | Sunday 05 April 2026 01:03:21 +0000 (0:00:00.307) 0:00:04.491 ********** 2026-04-05 01:05:52.901871 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 01:05:52.901880 | orchestrator | 2026-04-05 01:05:52.901890 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-05 01:05:52.901899 | orchestrator | Sunday 05 April 2026 01:03:22 +0000 (0:00:01.037) 0:00:05.529 ********** 2026-04-05 01:05:52.901909 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:05:52.901918 | orchestrator | 2026-04-05 01:05:52.901928 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-04-05 01:05:52.901937 | orchestrator | Sunday 05 April 2026 01:03:23 +0000 (0:00:00.688) 0:00:06.217 ********** 2026-04-05 01:05:52.901955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 01:05:52.901972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 01:05:52.901984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 01:05:52.902001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 01:05:52.902011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 01:05:52.902094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 01:05:52.902125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 01:05:52.902153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 01:05:52.902165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 
'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 01:05:52.902194 | orchestrator | 2026-04-05 01:05:52.902205 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-04-05 01:05:52.902216 | orchestrator | Sunday 05 April 2026 01:03:26 +0000 (0:00:03.347) 0:00:09.565 ********** 2026-04-05 01:05:52.902228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-05 01:05:52.902240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 01:05:52.902258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 01:05:52.902269 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:52.902286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-05 01:05:52.902299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-05 01:05:52.902317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-04-05 01:05:52.902328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 01:05:52.902339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 01:05:52.902357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 01:05:52.902368 | orchestrator | 
skipping: [testbed-node-2] 2026-04-05 01:05:52.902379 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:52.902390 | orchestrator | 2026-04-05 01:05:52.902401 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-04-05 01:05:52.902412 | orchestrator | Sunday 05 April 2026 01:03:27 +0000 (0:00:00.725) 0:00:10.290 ********** 2026-04-05 01:05:52.902428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-05 01:05:52.902446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 01:05:52.902458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 01:05:52.902470 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:52.902487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-05 01:05:52.902499 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 01:05:52.902516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 01:05:52.902534 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:52.902545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-05 01:05:52.902557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 01:05:52.902569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 01:05:52.902580 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:52.902591 | orchestrator | 2026-04-05 01:05:52.902602 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-04-05 01:05:52.902612 | 
orchestrator | Sunday 05 April 2026 01:03:28 +0000 (0:00:01.181) 0:00:11.472 ********** 2026-04-05 01:05:52.902631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 01:05:52.902654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 01:05:52.902667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 01:05:52.902679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 01:05:52.902696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 01:05:52.902708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 01:05:52.902735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 01:05:52.902747 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 01:05:52.902758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 01:05:52.902769 | orchestrator | 2026-04-05 01:05:52.902780 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-04-05 01:05:52.902791 | orchestrator | Sunday 05 April 2026 01:03:32 +0000 (0:00:03.643) 0:00:15.115 ********** 2026-04-05 01:05:52.902802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 01:05:52.902821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 01:05:52.902844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 01:05:52.902856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 01:05:52.902868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 01:05:52.902880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 01:05:52.902898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 01:05:52.902916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 01:05:52.902931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 01:05:52.902942 | orchestrator | 2026-04-05 01:05:52.902953 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-04-05 01:05:52.902965 | orchestrator | Sunday 05 April 2026 01:03:38 +0000 (0:00:06.228) 0:00:21.343 ********** 2026-04-05 01:05:52.902976 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:05:52.902987 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:05:52.902997 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:05:52.903008 | orchestrator | 2026-04-05 01:05:52.903019 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-04-05 01:05:52.903029 | orchestrator | Sunday 05 April 2026 01:03:39 +0000 (0:00:01.429) 0:00:22.773 ********** 2026-04-05 01:05:52.903040 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:52.903051 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:52.903062 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:52.903097 | orchestrator | 2026-04-05 01:05:52.903108 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-04-05 01:05:52.903119 | orchestrator | Sunday 05 April 2026 01:03:40 
+0000 (0:00:01.016) 0:00:23.789 ********** 2026-04-05 01:05:52.903130 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:52.903140 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:52.903151 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:52.903203 | orchestrator | 2026-04-05 01:05:52.903215 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-04-05 01:05:52.903226 | orchestrator | Sunday 05 April 2026 01:03:41 +0000 (0:00:00.335) 0:00:24.124 ********** 2026-04-05 01:05:52.903236 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:52.903247 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:52.903258 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:52.903269 | orchestrator | 2026-04-05 01:05:52.903280 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-04-05 01:05:52.903291 | orchestrator | Sunday 05 April 2026 01:03:41 +0000 (0:00:00.319) 0:00:24.444 ********** 2026-04-05 01:05:52.903303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-05 01:05:52.903336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 01:05:52.903354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-05 01:05:52.903367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 01:05:52.903379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 01:05:52.903390 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:52.903401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 01:05:52.903420 | orchestrator | skipping: [testbed-node-0] 2026-04-05 
01:05:52.903438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-05 01:05:52.903450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 01:05:52.903466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 01:05:52.903478 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:52.903489 | orchestrator | 2026-04-05 01:05:52.903499 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-05 01:05:52.903510 | orchestrator | Sunday 05 April 2026 01:03:42 +0000 (0:00:00.675) 0:00:25.120 ********** 2026-04-05 01:05:52.903521 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:52.903531 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:52.903542 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:52.903552 | orchestrator | 2026-04-05 01:05:52.903563 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-04-05 01:05:52.903573 | orchestrator | Sunday 05 April 2026 01:03:42 +0000 (0:00:00.539) 0:00:25.659 ********** 2026-04-05 01:05:52.903584 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-05 01:05:52.903595 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-05 01:05:52.903605 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-05 01:05:52.903616 | orchestrator | 2026-04-05 01:05:52.903626 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-04-05 01:05:52.903637 | orchestrator | Sunday 05 April 2026 01:03:44 +0000 (0:00:01.869) 0:00:27.529 ********** 2026-04-05 01:05:52.903648 | orchestrator | ok: [testbed-node-0 
-> localhost] 2026-04-05 01:05:52.903658 | orchestrator | 2026-04-05 01:05:52.903675 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-04-05 01:05:52.903686 | orchestrator | Sunday 05 April 2026 01:03:45 +0000 (0:00:01.034) 0:00:28.564 ********** 2026-04-05 01:05:52.903697 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:52.903708 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:52.903718 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:52.903729 | orchestrator | 2026-04-05 01:05:52.903740 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-04-05 01:05:52.903750 | orchestrator | Sunday 05 April 2026 01:03:46 +0000 (0:00:00.547) 0:00:29.111 ********** 2026-04-05 01:05:52.903761 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-05 01:05:52.903771 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 01:05:52.903782 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-05 01:05:52.903793 | orchestrator | 2026-04-05 01:05:52.903803 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-04-05 01:05:52.903814 | orchestrator | Sunday 05 April 2026 01:03:47 +0000 (0:00:01.588) 0:00:30.700 ********** 2026-04-05 01:05:52.903824 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:05:52.903835 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:05:52.903846 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:05:52.903862 | orchestrator | 2026-04-05 01:05:52.903881 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-04-05 01:05:52.903900 | orchestrator | Sunday 05 April 2026 01:03:48 +0000 (0:00:00.499) 0:00:31.199 ********** 2026-04-05 01:05:52.903919 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-05 01:05:52.903938 | orchestrator | changed: 
[testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-05 01:05:52.903952 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-05 01:05:52.903962 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-05 01:05:52.903973 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-05 01:05:52.903984 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-05 01:05:52.904001 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-05 01:05:52.904013 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-05 01:05:52.904024 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-05 01:05:52.904035 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-05 01:05:52.904045 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-05 01:05:52.904056 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-05 01:05:52.904104 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-05 01:05:52.904117 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-05 01:05:52.904128 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-05 01:05:52.904138 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 
'id_rsa'}) 2026-04-05 01:05:52.904155 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-05 01:05:52.904166 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-05 01:05:52.904177 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-05 01:05:52.904188 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-05 01:05:52.904206 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-05 01:05:52.904216 | orchestrator | 2026-04-05 01:05:52.904227 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-04-05 01:05:52.904237 | orchestrator | Sunday 05 April 2026 01:03:57 +0000 (0:00:09.139) 0:00:40.339 ********** 2026-04-05 01:05:52.904248 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-05 01:05:52.904258 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-05 01:05:52.904269 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-05 01:05:52.904280 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-05 01:05:52.904290 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-05 01:05:52.904301 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-05 01:05:52.904311 | orchestrator | 2026-04-05 01:05:52.904322 | orchestrator | TASK [service-check-containers : keystone | Check containers] ****************** 2026-04-05 01:05:52.904333 | orchestrator | Sunday 05 April 2026 01:04:00 +0000 (0:00:02.777) 0:00:43.116 ********** 2026-04-05 01:05:52.904346 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 01:05:52.904366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 01:05:52.904384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 01:05:52.904403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 01:05:52.904415 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 01:05:52.904426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 01:05:52.904437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 01:05:52.904455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 01:05:52.904467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 01:05:52.904483 | orchestrator | 2026-04-05 01:05:52.904499 | orchestrator | TASK [service-check-containers : keystone | Notify handlers to restart containers] *** 2026-04-05 01:05:52.904510 | orchestrator | Sunday 05 April 2026 01:04:02 +0000 (0:00:02.543) 0:00:45.659 ********** 2026-04-05 01:05:52.904521 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 01:05:52.904532 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:05:52.904543 | orchestrator | } 2026-04-05 01:05:52.904554 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 01:05:52.904565 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:05:52.904575 | orchestrator | } 2026-04-05 01:05:52.904586 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 01:05:52.904596 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:05:52.904607 | orchestrator | } 
2026-04-05 01:05:52.904618 | orchestrator | 2026-04-05 01:05:52.904628 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 01:05:52.904639 | orchestrator | Sunday 05 April 2026 01:04:03 +0000 (0:00:00.438) 0:00:46.098 ********** 2026-04-05 01:05:52.904651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-05 01:05:52.904663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 
8023'], 'timeout': '30'}}})  2026-04-05 01:05:52.904674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 01:05:52.904686 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:52.904705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-05 01:05:52.904727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 01:05:52.904739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 01:05:52.904750 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:52.904762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-05 01:05:52.904774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 01:05:52.904791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 01:05:52.904808 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:52.904819 | orchestrator | 2026-04-05 01:05:52.904830 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-05 01:05:52.904841 | orchestrator | Sunday 05 April 2026 01:04:03 +0000 (0:00:00.679) 0:00:46.778 ********** 2026-04-05 01:05:52.904852 | 
orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:52.904862 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:52.904873 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:52.904883 | orchestrator | 2026-04-05 01:05:52.904894 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-04-05 01:05:52.904905 | orchestrator | Sunday 05 April 2026 01:04:04 +0000 (0:00:00.244) 0:00:47.022 ********** 2026-04-05 01:05:52.904915 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:05:52.904926 | orchestrator | 2026-04-05 01:05:52.904937 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-04-05 01:05:52.904948 | orchestrator | Sunday 05 April 2026 01:04:06 +0000 (0:00:02.294) 0:00:49.317 ********** 2026-04-05 01:05:52.904963 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:05:52.904974 | orchestrator | 2026-04-05 01:05:52.904985 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-04-05 01:05:52.904995 | orchestrator | Sunday 05 April 2026 01:04:08 +0000 (0:00:02.163) 0:00:51.481 ********** 2026-04-05 01:05:52.905006 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:05:52.905017 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:05:52.905028 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:05:52.905038 | orchestrator | 2026-04-05 01:05:52.905049 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-04-05 01:05:52.905060 | orchestrator | Sunday 05 April 2026 01:04:09 +0000 (0:00:01.223) 0:00:52.704 ********** 2026-04-05 01:05:52.905092 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:05:52.905104 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:05:52.905115 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:05:52.905125 | orchestrator | 2026-04-05 01:05:52.905137 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping 
and not all hosts targeted] *** 2026-04-05 01:05:52.905147 | orchestrator | Sunday 05 April 2026 01:04:10 +0000 (0:00:00.327) 0:00:53.032 ********** 2026-04-05 01:05:52.905158 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:52.905169 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:52.905180 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:52.905190 | orchestrator | 2026-04-05 01:05:52.905201 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-04-05 01:05:52.905212 | orchestrator | Sunday 05 April 2026 01:04:10 +0000 (0:00:00.294) 0:00:53.327 ********** 2026-04-05 01:05:52.905222 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:05:52.905233 | orchestrator | 2026-04-05 01:05:52.905244 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-04-05 01:05:52.905254 | orchestrator | Sunday 05 April 2026 01:04:26 +0000 (0:00:16.009) 0:01:09.337 ********** 2026-04-05 01:05:52.905265 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:05:52.905276 | orchestrator | 2026-04-05 01:05:52.905286 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-05 01:05:52.905297 | orchestrator | Sunday 05 April 2026 01:04:38 +0000 (0:00:12.054) 0:01:21.391 ********** 2026-04-05 01:05:52.905308 | orchestrator | 2026-04-05 01:05:52.905319 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-05 01:05:52.905329 | orchestrator | Sunday 05 April 2026 01:04:38 +0000 (0:00:00.063) 0:01:21.455 ********** 2026-04-05 01:05:52.905351 | orchestrator | 2026-04-05 01:05:52.905362 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-05 01:05:52.905373 | orchestrator | Sunday 05 April 2026 01:04:38 +0000 (0:00:00.119) 0:01:21.574 ********** 2026-04-05 01:05:52.905383 | orchestrator | 2026-04-05 
01:05:52.905394 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-04-05 01:05:52.905404 | orchestrator | Sunday 05 April 2026 01:04:38 +0000 (0:00:00.261) 0:01:21.836 ********** 2026-04-05 01:05:52.905415 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:05:52.905426 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:05:52.905437 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:05:52.905447 | orchestrator | 2026-04-05 01:05:52.905462 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-04-05 01:05:52.905481 | orchestrator | Sunday 05 April 2026 01:04:58 +0000 (0:00:19.330) 0:01:41.166 ********** 2026-04-05 01:05:52.905497 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:05:52.905514 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:05:52.905531 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:05:52.905547 | orchestrator | 2026-04-05 01:05:52.905565 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-04-05 01:05:52.905583 | orchestrator | Sunday 05 April 2026 01:05:05 +0000 (0:00:07.551) 0:01:48.718 ********** 2026-04-05 01:05:52.905602 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:05:52.905621 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:05:52.905640 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:05:52.905659 | orchestrator | 2026-04-05 01:05:52.905671 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-05 01:05:52.905682 | orchestrator | Sunday 05 April 2026 01:05:18 +0000 (0:00:12.431) 0:02:01.149 ********** 2026-04-05 01:05:52.905693 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:05:52.905723 | orchestrator | 2026-04-05 01:05:52.905735 | orchestrator | TASK [keystone : Waiting for 
Keystone SSH port to be UP] *********************** 2026-04-05 01:05:52.905746 | orchestrator | Sunday 05 April 2026 01:05:18 +0000 (0:00:00.649) 0:02:01.798 ********** 2026-04-05 01:05:52.905756 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:05:52.905775 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:05:52.905787 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:05:52.905798 | orchestrator | 2026-04-05 01:05:52.905808 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-04-05 01:05:52.905825 | orchestrator | Sunday 05 April 2026 01:05:19 +0000 (0:00:00.702) 0:02:02.501 ********** 2026-04-05 01:05:52.905843 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:05:52.905862 | orchestrator | 2026-04-05 01:05:52.905889 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-04-05 01:05:52.905907 | orchestrator | Sunday 05 April 2026 01:05:21 +0000 (0:00:01.635) 0:02:04.136 ********** 2026-04-05 01:05:52.905925 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-04-05 01:05:52.905942 | orchestrator | 2026-04-05 01:05:52.905958 | orchestrator | TASK [service-ks-register : keystone | Creating/deleting services] ************* 2026-04-05 01:05:52.905976 | orchestrator | Sunday 05 April 2026 01:05:34 +0000 (0:00:13.577) 0:02:17.714 ********** 2026-04-05 01:05:52.905992 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-04-05 01:05:52.906009 | orchestrator | 2026-04-05 01:05:52.906110 | orchestrator | TASK [service-ks-register : keystone | Creating/deleting endpoints] ************ 2026-04-05 01:05:52.906129 | orchestrator | Sunday 05 April 2026 01:05:38 +0000 (0:00:03.614) 0:02:21.328 ********** 2026-04-05 01:05:52.906146 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-04-05 01:05:52.906174 | orchestrator | ok: [testbed-node-0] => (item=keystone -> 
https://api.testbed.osism.xyz:5000 -> public) 2026-04-05 01:05:52.906193 | orchestrator | 2026-04-05 01:05:52.906210 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-04-05 01:05:52.906242 | orchestrator | Sunday 05 April 2026 01:05:45 +0000 (0:00:06.948) 0:02:28.277 ********** 2026-04-05 01:05:52.906261 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:52.906278 | orchestrator | 2026-04-05 01:05:52.906295 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-04-05 01:05:52.906311 | orchestrator | Sunday 05 April 2026 01:05:45 +0000 (0:00:00.177) 0:02:28.454 ********** 2026-04-05 01:05:52.906329 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:52.906347 | orchestrator | 2026-04-05 01:05:52.906365 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-04-05 01:05:52.906382 | orchestrator | Sunday 05 April 2026 01:05:45 +0000 (0:00:00.171) 0:02:28.625 ********** 2026-04-05 01:05:52.906399 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:52.906418 | orchestrator | 2026-04-05 01:05:52.906437 | orchestrator | TASK [service-ks-register : keystone | Granting/revoking user roles] *********** 2026-04-05 01:05:52.906455 | orchestrator | Sunday 05 April 2026 01:05:46 +0000 (0:00:00.414) 0:02:29.040 ********** 2026-04-05 01:05:52.906474 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:52.906495 | orchestrator | 2026-04-05 01:05:52.906512 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-04-05 01:05:52.906531 | orchestrator | Sunday 05 April 2026 01:05:46 +0000 (0:00:00.402) 0:02:29.443 ********** 2026-04-05 01:05:52.906550 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:05:52.906567 | orchestrator | 2026-04-05 01:05:52.906585 | orchestrator | TASK [keystone : include_tasks] ************************************************ 
2026-04-05 01:05:52.906604 | orchestrator | Sunday 05 April 2026 01:05:49 +0000 (0:00:03.473) 0:02:32.916 ********** 2026-04-05 01:05:52.906621 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:52.906640 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:52.906659 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:52.906679 | orchestrator | 2026-04-05 01:05:52.906697 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:05:52.906717 | orchestrator | testbed-node-0 : ok=34  changed=20  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2026-04-05 01:05:52.906733 | orchestrator | testbed-node-1 : ok=23  changed=13  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-04-05 01:05:52.906744 | orchestrator | testbed-node-2 : ok=23  changed=13  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-04-05 01:05:52.906755 | orchestrator | 2026-04-05 01:05:52.906766 | orchestrator | 2026-04-05 01:05:52.906776 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:05:52.906787 | orchestrator | Sunday 05 April 2026 01:05:50 +0000 (0:00:00.390) 0:02:33.306 ********** 2026-04-05 01:05:52.906798 | orchestrator | =============================================================================== 2026-04-05 01:05:52.906808 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 19.33s 2026-04-05 01:05:52.906819 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 16.01s 2026-04-05 01:05:52.906830 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 13.58s 2026-04-05 01:05:52.906840 | orchestrator | keystone : Restart keystone container ---------------------------------- 12.43s 2026-04-05 01:05:52.906851 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 12.05s 2026-04-05 
01:05:52.906861 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.14s 2026-04-05 01:05:52.906872 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 7.55s 2026-04-05 01:05:52.906882 | orchestrator | service-ks-register : keystone | Creating/deleting endpoints ------------ 6.95s 2026-04-05 01:05:52.906893 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 6.23s 2026-04-05 01:05:52.906903 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.64s 2026-04-05 01:05:52.906914 | orchestrator | service-ks-register : keystone | Creating/deleting services ------------- 3.61s 2026-04-05 01:05:52.906950 | orchestrator | keystone : Creating default user role ----------------------------------- 3.47s 2026-04-05 01:05:52.906962 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.35s 2026-04-05 01:05:52.906972 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.78s 2026-04-05 01:05:52.906983 | orchestrator | service-check-containers : keystone | Check containers ------------------ 2.54s 2026-04-05 01:05:52.906993 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.52s 2026-04-05 01:05:52.907004 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.29s 2026-04-05 01:05:52.907014 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.16s 2026-04-05 01:05:52.907025 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.87s 2026-04-05 01:05:52.907036 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.64s 2026-04-05 01:05:52.907047 | orchestrator | 2026-04-05 01:05:52 | INFO  | Task 6f0d4905-1f37-4414-99ea-58c037e4cf82 is in state STARTED 2026-04-05 
01:05:52.907058 | orchestrator | 2026-04-05 01:05:52 | INFO  | Task 38bc2861-86e5-45d7-9ecc-6a9916d6989f is in state STARTED 2026-04-05 01:05:52.908487 | orchestrator | 2026-04-05 01:05:52 | INFO  | Task 3209e145-a6d4-44ab-be65-8c9c476b3f85 is in state STARTED 2026-04-05 01:05:52.908570 | orchestrator | 2026-04-05 01:05:52 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:06:05.083634 | orchestrator | 2026-04-05 01:06:05 | INFO  | Task c5da41c6-f339-4e68-bc33-6435a46ec6cc is in state SUCCESS 2026-04-05 01:06:05.085542 | orchestrator | 2026-04-05 01:06:05 | INFO  | Task 6f0d4905-1f37-4414-99ea-58c037e4cf82 is in state STARTED 2026-04-05 01:06:05.086555 | orchestrator | 2026-04-05 01:06:05 | INFO  | Task 38bc2861-86e5-45d7-9ecc-6a9916d6989f is in state STARTED 2026-04-05 01:06:05.089451 | orchestrator | 2026-04-05 01:06:05 | INFO  | Task 3209e145-a6d4-44ab-be65-8c9c476b3f85 is in state STARTED 2026-04-05 01:06:05.089529 | orchestrator | 2026-04-05 01:06:05 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:06:08.147457 | orchestrator | 2026-04-05 01:06:08 | INFO  | Task f84cf7f6-6338-408e-9a11-f6188f0fc692 is in state STARTED 2026-04-05 01:06:08.147931 | orchestrator | 2026-04-05 01:06:08 | INFO  | Task 6f0d4905-1f37-4414-99ea-58c037e4cf82 is in state STARTED 2026-04-05 01:06:08.149126 | orchestrator | 2026-04-05 01:06:08 | INFO  | Task 38bc2861-86e5-45d7-9ecc-6a9916d6989f is in state STARTED 2026-04-05 01:06:08.149513 | orchestrator | 2026-04-05 01:06:08 | INFO  | Task 3209e145-a6d4-44ab-be65-8c9c476b3f85 is in state STARTED 2026-04-05 01:06:08.149535 | orchestrator | 2026-04-05 01:06:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:07:12.248599 | orchestrator | 2026-04-05 01:07:12 | INFO  | Task f84cf7f6-6338-408e-9a11-f6188f0fc692 is in state STARTED 2026-04-05 01:07:12.249250 | orchestrator | 2026-04-05 01:07:12 | INFO  | Task 6f0d4905-1f37-4414-99ea-58c037e4cf82 is in state STARTED 2026-04-05 01:07:12.252853 | orchestrator | 2026-04-05 01:07:12.252922 | orchestrator | 2026-04-05 01:07:12.252942 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 01:07:12.252958 | orchestrator | 2026-04-05 01:07:12.252974 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 01:07:12.252990 | orchestrator | Sunday 05 April 2026 01:05:29 +0000 (0:00:00.307) 0:00:00.307 ********** 2026-04-05 01:07:12.253044 | orchestrator | ok: [testbed-manager] 2026-04-05 01:07:12.253095 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:07:12.253125 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:07:12.253134 | orchestrator | ok: [testbed-node-5] 
2026-04-05 01:07:12.253143 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:07:12.253151 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:07:12.253160 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:07:12.253168 | orchestrator | 2026-04-05 01:07:12.253177 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 01:07:12.253186 | orchestrator | Sunday 05 April 2026 01:05:30 +0000 (0:00:00.980) 0:00:01.288 ********** 2026-04-05 01:07:12.253222 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-04-05 01:07:12.253231 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-04-05 01:07:12.253241 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-04-05 01:07:12.253249 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-04-05 01:07:12.253299 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-04-05 01:07:12.253308 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-04-05 01:07:12.253316 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-04-05 01:07:12.253325 | orchestrator | 2026-04-05 01:07:12.253334 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-04-05 01:07:12.253343 | orchestrator | 2026-04-05 01:07:12.253352 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-04-05 01:07:12.253361 | orchestrator | Sunday 05 April 2026 01:05:31 +0000 (0:00:00.818) 0:00:02.106 ********** 2026-04-05 01:07:12.253370 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:07:12.253379 | orchestrator | 2026-04-05 01:07:12.253388 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating/deleting services] ************* 2026-04-05 
01:07:12.253397 | orchestrator | Sunday 05 April 2026 01:05:32 +0000 (0:00:01.218) 0:00:03.325 ********** 2026-04-05 01:07:12.253416 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2026-04-05 01:07:12.253426 | orchestrator | 2026-04-05 01:07:12.253437 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating/deleting endpoints] ************ 2026-04-05 01:07:12.253446 | orchestrator | Sunday 05 April 2026 01:05:36 +0000 (0:00:04.027) 0:00:07.353 ********** 2026-04-05 01:07:12.253456 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-04-05 01:07:12.253536 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-04-05 01:07:12.253548 | orchestrator | 2026-04-05 01:07:12.253559 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-04-05 01:07:12.253569 | orchestrator | Sunday 05 April 2026 01:05:45 +0000 (0:00:08.521) 0:00:15.874 ********** 2026-04-05 01:07:12.253580 | orchestrator | ok: [testbed-manager] => (item=service) 2026-04-05 01:07:12.253590 | orchestrator | 2026-04-05 01:07:12.253600 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-04-05 01:07:12.253610 | orchestrator | Sunday 05 April 2026 01:05:48 +0000 (0:00:03.514) 0:00:19.389 ********** 2026-04-05 01:07:12.253620 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2026-04-05 01:07:12.253630 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-05 01:07:12.253641 | orchestrator | 2026-04-05 01:07:12.253652 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-04-05 01:07:12.253662 | orchestrator | Sunday 05 April 2026 01:05:52 +0000 (0:00:03.715) 0:00:23.104 ********** 2026-04-05 01:07:12.253678 
| orchestrator | ok: [testbed-manager] => (item=admin) 2026-04-05 01:07:12.253735 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2026-04-05 01:07:12.253751 | orchestrator | 2026-04-05 01:07:12.253766 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting/revoking user roles] *********** 2026-04-05 01:07:12.253781 | orchestrator | Sunday 05 April 2026 01:05:58 +0000 (0:00:05.950) 0:00:29.054 ********** 2026-04-05 01:07:12.253808 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2026-04-05 01:07:12.253866 | orchestrator | 2026-04-05 01:07:12.253958 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:07:12.254179 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 01:07:12.254250 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 01:07:12.254267 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 01:07:12.254283 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 01:07:12.254298 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 01:07:12.254331 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 01:07:12.254348 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 01:07:12.254362 | orchestrator | 2026-04-05 01:07:12.254377 | orchestrator | 2026-04-05 01:07:12.254391 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:07:12.254408 | orchestrator | Sunday 05 April 2026 01:06:03 +0000 (0:00:05.230) 0:00:34.284 ********** 2026-04-05 01:07:12.254423 | orchestrator 
| =============================================================================== 2026-04-05 01:07:12.254438 | orchestrator | service-ks-register : ceph-rgw | Creating/deleting endpoints ------------ 8.52s 2026-04-05 01:07:12.254447 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.95s 2026-04-05 01:07:12.254456 | orchestrator | service-ks-register : ceph-rgw | Granting/revoking user roles ----------- 5.23s 2026-04-05 01:07:12.254465 | orchestrator | service-ks-register : ceph-rgw | Creating/deleting services ------------- 4.03s 2026-04-05 01:07:12.254473 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.72s 2026-04-05 01:07:12.254484 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.51s 2026-04-05 01:07:12.254499 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.22s 2026-04-05 01:07:12.254514 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.98s 2026-04-05 01:07:12.254528 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.82s 2026-04-05 01:07:12.254544 | orchestrator | 2026-04-05 01:07:12.254560 | orchestrator | 2026-04-05 01:07:12.254576 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 01:07:12.254591 | orchestrator | 2026-04-05 01:07:12.254606 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 01:07:12.254622 | orchestrator | Sunday 05 April 2026 01:04:08 +0000 (0:00:00.299) 0:00:00.299 ********** 2026-04-05 01:07:12.254637 | orchestrator | ok: [testbed-manager] 2026-04-05 01:07:12.254652 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:07:12.254667 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:07:12.254683 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:07:12.254699 | orchestrator 
| ok: [testbed-node-3] 2026-04-05 01:07:12.254713 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:07:12.254725 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:07:12.254733 | orchestrator | 2026-04-05 01:07:12.254751 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 01:07:12.254760 | orchestrator | Sunday 05 April 2026 01:04:09 +0000 (0:00:00.856) 0:00:01.155 ********** 2026-04-05 01:07:12.254773 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-04-05 01:07:12.254801 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-04-05 01:07:12.254817 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-04-05 01:07:12.254833 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-04-05 01:07:12.254848 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-04-05 01:07:12.254863 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-04-05 01:07:12.254878 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-04-05 01:07:12.254893 | orchestrator | 2026-04-05 01:07:12.254909 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-04-05 01:07:12.254924 | orchestrator | 2026-04-05 01:07:12.254939 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-04-05 01:07:12.254954 | orchestrator | Sunday 05 April 2026 01:04:10 +0000 (0:00:00.973) 0:00:02.128 ********** 2026-04-05 01:07:12.254990 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:07:12.255006 | orchestrator | 2026-04-05 01:07:12.255022 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-04-05 01:07:12.255036 | 
orchestrator | Sunday 05 April 2026 01:04:11 +0000 (0:00:01.194) 0:00:03.323 ********** 2026-04-05 01:07:12.255052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:07:12.255087 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-05 01:07:12.255100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:07:12.255110 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:07:12.255141 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:07:12.255157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:07:12.255172 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:07:12.255187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:07:12.255221 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:07:12.255247 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:07:12.255263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:07:12.255277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:07:12.255306 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:07:12.255321 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:07:12.255335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:07:12.255349 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:07:12.255373 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:07:12.255388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:07:12.255411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:07:12.255431 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 01:07:12.255447 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 01:07:12.255461 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 01:07:12.255476 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:07:12.255491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': 
{'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:07:12.255514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:07:12.255529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:07:12.255554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 01:07:12.255574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:07:12.255589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:07:12.255604 | orchestrator |
2026-04-05 01:07:12.255620 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-04-05 01:07:12.255635 | orchestrator | Sunday 05 April 2026 01:04:16 +0000 (0:00:04.966) 0:00:08.290 **********
2026-04-05 01:07:12.255651 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 01:07:12.255666 | orchestrator |
2026-04-05 01:07:12.255680 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2026-04-05 01:07:12.255695 | orchestrator | Sunday 05 April 2026 01:04:18 +0000 (0:00:02.129)
0:00:10.420 ********** 2026-04-05 01:07:12.255721 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-05 01:07:12.255740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:07:12.255766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:07:12.255780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:07:12.255809 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:07:12.255825 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:07:12.255840 | orchestrator | changed: [testbed-node-5] 
=> (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:07:12.255854 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:07:12.255869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:07:12.255892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:07:12.255915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:07:12.255931 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:07:12.255951 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:07:12.255967 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 01:07:12.255982 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 01:07:12.255996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:07:12 | INFO  | Task 38bc2861-86e5-45d7-9ecc-6a9916d6989f is in state SUCCESS
2026-04-05 01:07:12.256020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:07:12.256059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:07:12.256074 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-05 01:07:12.256090 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-05 01:07:12.256110 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image':
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 01:07:12.256126 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:07:12.256142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:07:12.256173 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:07:12.256188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:07:12.256256 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:07:12.256278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:07:12.256294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:07:12.256310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:07:12.256325 | orchestrator |
2026-04-05 01:07:12.256339 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] ***
2026-04-05 01:07:12.256353 | orchestrator | Sunday 05 April 2026 01:04:24 +0000 (0:00:06.298) 0:00:16.718 **********
2026-04-05 01:07:12.256377 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes':
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-05 01:07:12.256403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 01:07:12.256418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:07:12.256432 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:07:12.256476 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 01:07:12.256494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 01:07:12.256511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 01:07:12.256526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:07:12.256551 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:07:12.256577 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 01:07:12.256593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:07:12.256610 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 01:07:12.256633 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:07:12.256651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:07:12.256668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:07:12.256692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:07:12.256715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 01:07:12.256732 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:07:12.256749 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:07:12.256765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 01:07:12.256788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:07:12.256804 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:07:12.256819 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 01:07:12.256834 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 01:07:12.256858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:07:12.256874 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:07:12.256889 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 
01:07:12.256916 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 01:07:12.256932 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 01:07:12.256949 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:07:12.256964 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 01:07:12.256987 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 01:07:12.257003 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:07:12.257018 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 01:07:12.257050 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 01:07:12.257066 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:07:12.257081 | orchestrator | 2026-04-05 01:07:12.257097 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-04-05 01:07:12.257113 | orchestrator | Sunday 05 April 2026 01:04:27 +0000 (0:00:02.224) 0:00:18.943 ********** 2026-04-05 01:07:12.257137 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 
'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-05 01:07:12.257148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 01:07:12.257157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 01:07:12.257170 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 01:07:12.257180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:07:12.257223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:07:12.257235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:07:12.257244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 01:07:12.257259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 01:07:12.257269 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 01:07:12.257278 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:07:12.257291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:07:12.257300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:07:12.257316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:07:12.257325 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:07:12.257335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 01:07:12.257351 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:07:12.257361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 01:07:12.257370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:07:12.257379 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:07:12.257392 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 01:07:12.257406 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 01:07:12.257416 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 01:07:12.257425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:07:12.257434 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:07:12.257448 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 
01:07:12.257457 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:07:12.257469 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 01:07:12.257485 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 01:07:12.257508 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:07:12.257524 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 01:07:12.257556 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:07:12.257573 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 01:07:12.257688 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 01:07:12.257707 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 01:07:12.257721 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:07:12.257734 | orchestrator | 2026-04-05 01:07:12.257749 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-04-05 01:07:12.257764 | orchestrator | Sunday 05 April 2026 01:04:29 +0000 (0:00:02.919) 0:00:21.862 ********** 2026-04-05 01:07:12.257780 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': 
{'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-05 01:07:12.257796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:07:12.257811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:07:12.257842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:07:12.257857 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:07:12.257879 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:07:12.257894 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:07:12.257908 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:07:12.257922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:07:12.257937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:07:12.257951 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:07:12.257980 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:07:12.257995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:07:12.258060 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:07:12.258083 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:07:12.258116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:07:12.258131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:07:12.258147 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:07:12.258175 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 01:07:12.258185 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 
2026-04-05 01:07:12.258219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:07:12.258228 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 01:07:12.258237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:07:12.258245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:07:12.258261 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:07:12.258270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:07:12.258282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:07:12.258294 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:07:12.258303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:07:12.258311 | orchestrator | 2026-04-05 01:07:12.258319 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-04-05 01:07:12.258327 | orchestrator | Sunday 05 April 2026 01:04:36 +0000 (0:00:06.553) 0:00:28.416 ********** 2026-04-05 01:07:12.258338 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-05 01:07:12.258352 | orchestrator | 2026-04-05 01:07:12.258364 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-04-05 01:07:12.258377 | orchestrator | Sunday 05 April 2026 01:04:37 +0000 (0:00:00.873) 0:00:29.289 ********** 2026-04-05 01:07:12.258390 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:07:12.258403 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:07:12.258416 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:07:12.258429 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:07:12.258442 
| orchestrator | skipping: [testbed-node-3] 2026-04-05 01:07:12.258454 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:07:12.258466 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:07:12.258478 | orchestrator | 2026-04-05 01:07:12.258491 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-04-05 01:07:12.258505 | orchestrator | Sunday 05 April 2026 01:04:38 +0000 (0:00:00.726) 0:00:30.016 ********** 2026-04-05 01:07:12.258517 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-05 01:07:12.258538 | orchestrator | 2026-04-05 01:07:12.258552 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-04-05 01:07:12.258565 | orchestrator | Sunday 05 April 2026 01:04:38 +0000 (0:00:00.865) 0:00:30.882 ********** 2026-04-05 01:07:12.258580 | orchestrator | [WARNING]: Skipped 2026-04-05 01:07:12.258595 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 01:07:12.258609 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-04-05 01:07:12.258623 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 01:07:12.258636 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-04-05 01:07:12.258650 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-05 01:07:12.258665 | orchestrator | [WARNING]: Skipped 2026-04-05 01:07:12.258677 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 01:07:12.258689 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-04-05 01:07:12.258696 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 01:07:12.258704 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-04-05 01:07:12.258712 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-05 01:07:12.258720 | 
orchestrator | [WARNING]: Skipped 2026-04-05 01:07:12.258728 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 01:07:12.258736 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-04-05 01:07:12.258743 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 01:07:12.258751 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-04-05 01:07:12.258759 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-05 01:07:12.258766 | orchestrator | [WARNING]: Skipped 2026-04-05 01:07:12.258774 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 01:07:12.258782 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-04-05 01:07:12.258790 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 01:07:12.258797 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-04-05 01:07:12.258805 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 01:07:12.258813 | orchestrator | [WARNING]: Skipped 2026-04-05 01:07:12.258821 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 01:07:12.258834 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-04-05 01:07:12.258843 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 01:07:12.258850 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-04-05 01:07:12.258858 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-05 01:07:12.258866 | orchestrator | [WARNING]: Skipped 2026-04-05 01:07:12.258874 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 01:07:12.258882 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-04-05 01:07:12.258889 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 01:07:12.258897 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-04-05 01:07:12.258905 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-05 01:07:12.258913 | orchestrator | [WARNING]: Skipped 2026-04-05 01:07:12.258925 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 01:07:12.258939 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-04-05 01:07:12.258951 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 01:07:12.258964 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-04-05 01:07:12.258986 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-05 01:07:12.259008 | orchestrator | 2026-04-05 01:07:12.259016 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-04-05 01:07:12.259024 | orchestrator | Sunday 05 April 2026 01:04:41 +0000 (0:00:02.589) 0:00:33.472 ********** 2026-04-05 01:07:12.259032 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-05 01:07:12.259040 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:07:12.259048 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-05 01:07:12.259056 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:07:12.259064 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-05 01:07:12.259072 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:07:12.259079 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-05 01:07:12.259087 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:07:12.259095 | orchestrator | skipping: [testbed-node-4] => 
(item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-05 01:07:12.259103 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:07:12.259111 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-05 01:07:12.259118 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:07:12.259130 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-04-05 01:07:12.259143 | orchestrator | 2026-04-05 01:07:12.259157 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-04-05 01:07:12.259170 | orchestrator | Sunday 05 April 2026 01:04:56 +0000 (0:00:15.057) 0:00:48.529 ********** 2026-04-05 01:07:12.259183 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-05 01:07:12.259347 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-05 01:07:12.259363 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:07:12.259372 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:07:12.259379 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-05 01:07:12.259387 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:07:12.259395 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-05 01:07:12.259403 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:07:12.259411 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-05 01:07:12.259419 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:07:12.259427 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-05 01:07:12.259435 | orchestrator | 
skipping: [testbed-node-5] 2026-04-05 01:07:12.259443 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-04-05 01:07:12.259451 | orchestrator | 2026-04-05 01:07:12.259459 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-04-05 01:07:12.259467 | orchestrator | Sunday 05 April 2026 01:04:59 +0000 (0:00:03.187) 0:00:51.716 ********** 2026-04-05 01:07:12.259475 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-05 01:07:12.259485 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:07:12.259493 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-05 01:07:12.259501 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-05 01:07:12.259509 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:07:12.259517 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:07:12.259533 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-05 01:07:12.259547 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:07:12.259555 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-05 01:07:12.259563 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:07:12.259571 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-05 01:07:12.259579 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:07:12.259587 | orchestrator | changed: [testbed-manager] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-04-05 01:07:12.259595 | orchestrator | 2026-04-05 01:07:12.259603 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-04-05 01:07:12.259610 | orchestrator | Sunday 05 April 2026 01:05:01 +0000 (0:00:01.644) 0:00:53.361 ********** 2026-04-05 01:07:12.259618 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-05 01:07:12.259626 | orchestrator | 2026-04-05 01:07:12.259634 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-04-05 01:07:12.259642 | orchestrator | Sunday 05 April 2026 01:05:02 +0000 (0:00:00.824) 0:00:54.186 ********** 2026-04-05 01:07:12.259650 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:07:12.259658 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:07:12.259675 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:07:12.259683 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:07:12.259691 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:07:12.259699 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:07:12.259707 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:07:12.259715 | orchestrator | 2026-04-05 01:07:12.259723 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-04-05 01:07:12.259731 | orchestrator | Sunday 05 April 2026 01:05:02 +0000 (0:00:00.719) 0:00:54.905 ********** 2026-04-05 01:07:12.259739 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:07:12.259747 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:07:12.259754 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:07:12.259762 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:07:12.259770 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:07:12.259778 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:07:12.259786 | 
orchestrator | changed: [testbed-node-2] 2026-04-05 01:07:12.259794 | orchestrator | 2026-04-05 01:07:12.259801 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-04-05 01:07:12.259809 | orchestrator | Sunday 05 April 2026 01:05:04 +0000 (0:00:01.998) 0:00:56.904 ********** 2026-04-05 01:07:12.259817 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-05 01:07:12.259825 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:07:12.259833 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-05 01:07:12.259841 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:07:12.259849 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-05 01:07:12.259857 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:07:12.259865 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-05 01:07:12.259873 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:07:12.259880 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-05 01:07:12.259888 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:07:12.259896 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-05 01:07:12.259904 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:07:12.259924 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-05 01:07:12.259933 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:07:12.259941 | orchestrator | 2026-04-05 01:07:12.259949 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-04-05 01:07:12.259957 | orchestrator | Sunday 05 April 2026 01:05:07 +0000 
(0:00:02.258) 0:00:59.162 ********** 2026-04-05 01:07:12.259965 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-05 01:07:12.259973 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:07:12.259981 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-05 01:07:12.259989 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:07:12.259997 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-05 01:07:12.260004 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:07:12.260012 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-04-05 01:07:12.260020 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-05 01:07:12.260028 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:07:12.260036 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-05 01:07:12.260044 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:07:12.260052 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-05 01:07:12.260060 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:07:12.260067 | orchestrator | 2026-04-05 01:07:12.260075 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-04-05 01:07:12.260088 | orchestrator | Sunday 05 April 2026 01:05:08 +0000 (0:00:01.714) 0:01:00.876 ********** 2026-04-05 01:07:12.260096 | orchestrator | [WARNING]: Skipped 2026-04-05 01:07:12.260104 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-04-05 01:07:12.260112 | orchestrator | due to this access issue: 2026-04-05 01:07:12.260120 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-04-05 01:07:12.260128 | orchestrator | not a directory 2026-04-05 01:07:12.260135 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-05 01:07:12.260143 | orchestrator | 2026-04-05 01:07:12.260151 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-04-05 01:07:12.260159 | orchestrator | Sunday 05 April 2026 01:05:10 +0000 (0:00:01.107) 0:01:01.984 ********** 2026-04-05 01:07:12.260167 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:07:12.260175 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:07:12.260183 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:07:12.260207 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:07:12.260215 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:07:12.260223 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:07:12.260231 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:07:12.260239 | orchestrator | 2026-04-05 01:07:12.260247 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-04-05 01:07:12.260255 | orchestrator | Sunday 05 April 2026 01:05:10 +0000 (0:00:00.649) 0:01:02.633 ********** 2026-04-05 01:07:12.260263 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:07:12.260275 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:07:12.260283 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:07:12.260291 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:07:12.260299 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:07:12.260307 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:07:12.260315 | orchestrator | skipping: [testbed-node-5] 2026-04-05 
01:07:12.260328 | orchestrator | 2026-04-05 01:07:12.260337 | orchestrator | TASK [service-check-containers : prometheus | Check containers] **************** 2026-04-05 01:07:12.260345 | orchestrator | Sunday 05 April 2026 01:05:11 +0000 (0:00:00.729) 0:01:03.363 ********** 2026-04-05 01:07:12.260354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:07:12.260364 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-05 01:07:12.260374 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:07:12.260382 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:07:12.260394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:07:12.260403 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:07:12.260417 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:07:12.260440 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:07:12.260449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:07:12.260457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:07:12.260466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:07:12.260475 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:07:12.260487 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 
01:07:12.260496 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:07:12.260514 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:07:12.260523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:07:12.260531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:07:12.260540 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 01:07:12.260548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:07:12.260556 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 01:07:12.260565 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:07:12.260582 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 01:07:12.260591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:07:12.260625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:07:12.260635 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:07:12.260643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:07:12.260655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:07:12.260664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:07:12.260681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:07:12.260690 | orchestrator | 2026-04-05 01:07:12.260698 | orchestrator | TASK [service-check-containers : prometheus | Notify handlers to restart containers] *** 2026-04-05 01:07:12.260706 | orchestrator | Sunday 05 April 2026 01:05:15 +0000 (0:00:04.324) 0:01:07.687 ********** 2026-04-05 01:07:12.260714 | orchestrator | changed: [testbed-manager] => { 2026-04-05 01:07:12.260723 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:07:12.260731 | orchestrator | } 2026-04-05 01:07:12.260739 
| orchestrator | changed: [testbed-node-0] => { 2026-04-05 01:07:12.260747 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:07:12.260755 | orchestrator | } 2026-04-05 01:07:12.260763 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 01:07:12.260771 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:07:12.260779 | orchestrator | } 2026-04-05 01:07:12.260786 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 01:07:12.260794 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:07:12.260802 | orchestrator | } 2026-04-05 01:07:12.260810 | orchestrator | changed: [testbed-node-3] => { 2026-04-05 01:07:12.260818 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:07:12.260826 | orchestrator | } 2026-04-05 01:07:12.260834 | orchestrator | changed: [testbed-node-4] => { 2026-04-05 01:07:12.260842 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:07:12.260850 | orchestrator | } 2026-04-05 01:07:12.260858 | orchestrator | changed: [testbed-node-5] => { 2026-04-05 01:07:12.260866 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:07:12.260885 | orchestrator | } 2026-04-05 01:07:12.260900 | orchestrator | 2026-04-05 01:07:12.260909 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 01:07:12.260916 | orchestrator | Sunday 05 April 2026 01:05:16 +0000 (0:00:00.656) 0:01:08.344 ********** 2026-04-05 01:07:12.260925 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-05 01:07:12.260935 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 01:07:12.260952 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 01:07:12.260967 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:07:12.260976 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:07:12.260985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 01:07:12.260993 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:07:12.261001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:07:12.261009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 01:07:12.261026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:07:12.261034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 01:07:12.261048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:07:12.261056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:07:12.261065 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:07:12.261073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 01:07:12.261081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:07:12.261089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 01:07:12.261101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:07:12.261113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:07:12.261121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 01:07:12.261133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:07:12.261142 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:07:12.261150 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:07:12.261158 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:07:12.261166 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 01:07:12.261174 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 01:07:12.261183 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-05 01:07:12.261224 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:07:12.261233 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 01:07:12.261242 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 01:07:12.261254 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-05 01:07:12.261263 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:07:12.261275 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 01:07:12.261284 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 01:07:12.261292 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-05 01:07:12.261300 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:07:12.261308 | orchestrator |
2026-04-05 01:07:12.261316 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2026-04-05 01:07:12.261324 | orchestrator | Sunday 05 April 2026 01:05:18 +0000 (0:00:01.768) 0:01:10.113 **********
2026-04-05 01:07:12.261332 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-05 01:07:12.261340 | orchestrator | skipping: [testbed-manager]
2026-04-05 01:07:12.261354 | orchestrator |
2026-04-05 01:07:12.261363 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-05 01:07:12.261370 | orchestrator | Sunday 05 April 2026 01:05:19 +0000 (0:00:01.028) 0:01:11.142 **********
2026-04-05 01:07:12.261378 | orchestrator |
2026-04-05 01:07:12.261386 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-05 01:07:12.261394 | orchestrator | Sunday 05 April 2026 01:05:19 +0000 (0:00:00.198) 0:01:11.341 **********
2026-04-05 01:07:12.261401 | orchestrator |
2026-04-05 01:07:12.261409 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-05 01:07:12.261417 | orchestrator | Sunday 05 April 2026 01:05:19 +0000 (0:00:00.059) 0:01:11.400 **********
2026-04-05 01:07:12.261425 | orchestrator |
2026-04-05 01:07:12.261433 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-05 01:07:12.261440 | orchestrator | Sunday 05 April 2026 01:05:19 +0000 (0:00:00.058) 0:01:11.459 **********
2026-04-05 01:07:12.261448 | orchestrator |
2026-04-05 01:07:12.261456 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-05 01:07:12.261465 | orchestrator | Sunday 05 April 2026 01:05:19 +0000 (0:00:00.059) 0:01:11.518 **********
2026-04-05 01:07:12.261472 | orchestrator |
2026-04-05 01:07:12.261480 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-05 01:07:12.261488 | orchestrator | Sunday 05 April 2026 01:05:19 +0000 (0:00:00.056) 0:01:11.575 **********
2026-04-05 01:07:12.261496 | orchestrator |
2026-04-05 01:07:12.261504 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-05 01:07:12.261511 | orchestrator | Sunday 05 April 2026 01:05:19 +0000 (0:00:00.060) 0:01:11.635 **********
2026-04-05 01:07:12.261519 | orchestrator |
2026-04-05 01:07:12.261527 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-04-05 01:07:12.261535 | orchestrator | Sunday 05 April 2026 01:05:19 +0000 (0:00:00.082) 0:01:11.718 **********
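The skipped loop items above all share one Kolla-style service-definition shape: a service name mapped to a dict with `container_name`, `group`, `enabled`, `image`, `volumes`, and `dimensions`, and kolla-ansible skips a host when a service is not deployed there. As a minimal sketch of working with that shape (the node-exporter entry is copied from the log; `example-disabled-service` and the `enabled_services` helper are hypothetical, not kolla-ansible code):

```python
# Service-definition mapping in the shape seen in the skipped loop items above.
# "example-disabled-service" and enabled_services() are illustrative only.
SERVICES = {
    "prometheus-node-exporter": {
        "container_name": "prometheus_node_exporter",
        "group": "prometheus-node-exporter",
        "enabled": True,
        "image": "registry.osism.tech/kolla/prometheus-node-exporter:2025.1",
        "pid_mode": "host",
        "volumes": [
            "/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "kolla_logs:/var/log/kolla/",
            "/:/host:ro,rslave",
        ],
        "dimensions": {},
    },
    "example-disabled-service": {
        "container_name": "example_disabled_service",
        "group": "example-disabled-service",
        "enabled": False,
        "image": "registry.osism.tech/kolla/example:2025.1",
        "volumes": [],
        "dimensions": {},
    },
}


def enabled_services(definitions):
    """Keep only the services whose 'enabled' flag is set."""
    return {name: svc for name, svc in definitions.items() if svc.get("enabled")}
```

Filtering on the `enabled` flag is what produces the `skipping:` lines above for hosts that are not in a service's group.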
2026-04-05 01:07:12.261543 | orchestrator | changed: [testbed-manager]
2026-04-05 01:07:12.261550 | orchestrator |
2026-04-05 01:07:12.261558 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-04-05 01:07:12.261566 | orchestrator | Sunday 05 April 2026 01:05:38 +0000 (0:00:18.689) 0:01:30.407 **********
2026-04-05 01:07:12.261574 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:07:12.261582 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:07:12.261590 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:07:12.261598 | orchestrator | changed: [testbed-node-4]
2026-04-05 01:07:12.261606 | orchestrator | changed: [testbed-node-3]
2026-04-05 01:07:12.261617 | orchestrator | changed: [testbed-manager]
2026-04-05 01:07:12.261625 | orchestrator | changed: [testbed-node-5]
2026-04-05 01:07:12.261633 | orchestrator |
2026-04-05 01:07:12.261641 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-04-05 01:07:12.261648 | orchestrator | Sunday 05 April 2026 01:05:53 +0000 (0:00:14.567) 0:01:44.975 **********
2026-04-05 01:07:12.261656 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:07:12.261664 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:07:12.261672 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:07:12.261680 | orchestrator |
2026-04-05 01:07:12.261688 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-04-05 01:07:12.261696 | orchestrator | Sunday 05 April 2026 01:06:03 +0000 (0:00:10.120) 0:01:55.095 **********
2026-04-05 01:07:12.261703 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:07:12.261711 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:07:12.261719 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:07:12.261727 | orchestrator |
2026-04-05 01:07:12.261735 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-04-05 01:07:12.261743 | orchestrator | Sunday 05 April 2026 01:06:13 +0000 (0:00:10.541) 0:02:05.637 **********
2026-04-05 01:07:12.261750 | orchestrator | changed: [testbed-manager]
2026-04-05 01:07:12.261758 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:07:12.261766 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:07:12.261778 | orchestrator | changed: [testbed-node-3]
2026-04-05 01:07:12.261786 | orchestrator | changed: [testbed-node-4]
2026-04-05 01:07:12.261850 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:07:12.261860 | orchestrator | changed: [testbed-node-5]
2026-04-05 01:07:12.261868 | orchestrator |
2026-04-05 01:07:12.261876 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-04-05 01:07:12.261884 | orchestrator | Sunday 05 April 2026 01:06:28 +0000 (0:00:14.854) 0:02:20.492 **********
2026-04-05 01:07:12.261892 | orchestrator | changed: [testbed-manager]
2026-04-05 01:07:12.261900 | orchestrator |
2026-04-05 01:07:12.261907 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-04-05 01:07:12.261915 | orchestrator | Sunday 05 April 2026 01:06:41 +0000 (0:00:12.827) 0:02:33.319 **********
2026-04-05 01:07:12.261923 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:07:12.261931 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:07:12.261938 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:07:12.261946 | orchestrator |
2026-04-05 01:07:12.261953 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-04-05 01:07:12.261961 | orchestrator | Sunday 05 April 2026 01:06:52 +0000 (0:00:11.473) 0:02:44.793 **********
2026-04-05 01:07:12.261969 | orchestrator | changed: [testbed-manager]
2026-04-05 01:07:12.261980 | orchestrator |
2026-04-05 01:07:12.261988 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-04-05 01:07:12.261996 | orchestrator | Sunday 05 April 2026 01:06:58 +0000 (0:00:05.520) 0:02:50.313 **********
2026-04-05 01:07:12.262004 | orchestrator | changed: [testbed-node-4]
2026-04-05 01:07:12.262012 | orchestrator | changed: [testbed-node-3]
2026-04-05 01:07:12.262055 | orchestrator | changed: [testbed-node-5]
2026-04-05 01:07:12.262063 | orchestrator |
2026-04-05 01:07:12.262071 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 01:07:12.262079 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-04-05 01:07:12.262087 | orchestrator | testbed-node-0 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-05 01:07:12.262095 | orchestrator | testbed-node-1 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-05 01:07:12.262103 | orchestrator | testbed-node-2 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-05 01:07:12.262112 | orchestrator | testbed-node-3 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-04-05 01:07:12.262120 | orchestrator | testbed-node-4 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-04-05 01:07:12.262128 | orchestrator | testbed-node-5 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-04-05 01:07:12.262136 | orchestrator |
2026-04-05 01:07:12.262144 | orchestrator |
2026-04-05 01:07:12.262152 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 01:07:12.262160 | orchestrator | Sunday 05 April 2026 01:07:10 +0000 (0:00:12.417) 0:03:02.731 **********
2026-04-05 01:07:12.262168 | orchestrator | ===============================================================================
2026-04-05 01:07:12.262175 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 18.69s
2026-04-05 01:07:12.262183 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 15.06s
2026-04-05 01:07:12.262205 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.86s
2026-04-05 01:07:12.262214 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.57s
2026-04-05 01:07:12.262229 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 12.83s
2026-04-05 01:07:12.262237 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 12.42s
2026-04-05 01:07:12.262245 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 11.47s
2026-04-05 01:07:12.262253 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.54s
2026-04-05 01:07:12.262267 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.12s
2026-04-05 01:07:12.262276 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.55s
2026-04-05 01:07:12.262284 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.30s
2026-04-05 01:07:12.262291 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.52s
2026-04-05 01:07:12.262299 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.97s
2026-04-05 01:07:12.262307 | orchestrator | service-check-containers : prometheus | Check containers ---------------- 4.32s
2026-04-05 01:07:12.262315 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.19s
2026-04-05 01:07:12.262323 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.92s
2026-04-05 01:07:12.262330 | orchestrator | prometheus : Find prometheus host config overrides
---------------------- 2.59s 2026-04-05 01:07:12.262338 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.26s 2026-04-05 01:07:12.262346 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 2.22s 2026-04-05 01:07:12.262354 | orchestrator | prometheus : include_tasks ---------------------------------------------- 2.13s 2026-04-05 01:07:12.262367 | orchestrator | 2026-04-05 01:07:12 | INFO  | Task 3209e145-a6d4-44ab-be65-8c9c476b3f85 is in state STARTED 2026-04-05 01:07:12.262375 | orchestrator | 2026-04-05 01:07:12 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:07:15.276643 | orchestrator | 2026-04-05 01:07:15 | INFO  | Task f84cf7f6-6338-408e-9a11-f6188f0fc692 is in state STARTED 2026-04-05 01:07:15.277088 | orchestrator | 2026-04-05 01:07:15 | INFO  | Task 6f0d4905-1f37-4414-99ea-58c037e4cf82 is in state STARTED 2026-04-05 01:07:15.277652 | orchestrator | 2026-04-05 01:07:15 | INFO  | Task 3209e145-a6d4-44ab-be65-8c9c476b3f85 is in state STARTED 2026-04-05 01:07:15.278409 | orchestrator | 2026-04-05 01:07:15 | INFO  | Task 2b195116-5e97-483a-acf3-e3eff2cfdba4 is in state STARTED 2026-04-05 01:07:15.278444 | orchestrator | 2026-04-05 01:07:15 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:07:18.300436 | orchestrator | 2026-04-05 01:07:18 | INFO  | Task f84cf7f6-6338-408e-9a11-f6188f0fc692 is in state STARTED 2026-04-05 01:07:18.300646 | orchestrator | 2026-04-05 01:07:18 | INFO  | Task 6f0d4905-1f37-4414-99ea-58c037e4cf82 is in state STARTED 2026-04-05 01:07:18.301404 | orchestrator | 2026-04-05 01:07:18 | INFO  | Task 3209e145-a6d4-44ab-be65-8c9c476b3f85 is in state STARTED 2026-04-05 01:07:18.302286 | orchestrator | 2026-04-05 01:07:18 | INFO  | Task 2b195116-5e97-483a-acf3-e3eff2cfdba4 is in state STARTED 2026-04-05 01:07:18.302334 | orchestrator | 2026-04-05 01:07:18 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:07:21.339438 | 
orchestrator | 2026-04-05 01:07:21 | INFO  | Task
f84cf7f6-6338-408e-9a11-f6188f0fc692 is in state STARTED 2026-04-05 01:08:34.391847 | orchestrator | 2026-04-05 01:08:34 | INFO  | Task 6f0d4905-1f37-4414-99ea-58c037e4cf82 is in state STARTED 2026-04-05 01:08:34.393957 | orchestrator | 2026-04-05 01:08:34 | INFO  | Task 3209e145-a6d4-44ab-be65-8c9c476b3f85 is in state STARTED 2026-04-05 01:08:34.396118 | orchestrator | 2026-04-05 01:08:34 | INFO  | Task 2b195116-5e97-483a-acf3-e3eff2cfdba4 is in state STARTED 2026-04-05 01:08:34.396146 | orchestrator | 2026-04-05 01:08:34 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:08:40.479854 | orchestrator | 2026-04-05 01:08:40 | INFO  | Task f84cf7f6-6338-408e-9a11-f6188f0fc692 is in state STARTED 2026-04-05 01:08:40.479972 | orchestrator | 2026-04-05 01:08:40 | INFO  | Task e63a9692-007a-427d-ba16-7bf26e46c2c2 is in state STARTED 2026-04-05 01:08:40.479996 | orchestrator | 2026-04-05 01:08:40 | INFO  | Task 6f0d4905-1f37-4414-99ea-58c037e4cf82 is in state SUCCESS 2026-04-05 01:08:40.480016 | orchestrator | 2026-04-05 01:08:40 | INFO  | Task 3209e145-a6d4-44ab-be65-8c9c476b3f85 is in state STARTED 2026-04-05 01:08:40.480964 | orchestrator | 2026-04-05 01:08:40.481005 | orchestrator | 2026-04-05 01:08:40.481017 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 
2026-04-05 01:08:40.481029 | orchestrator | 2026-04-05 01:08:40.481041 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 01:08:40.481052 | orchestrator | Sunday 05 April 2026 01:05:29 +0000 (0:00:00.292) 0:00:00.292 ********** 2026-04-05 01:08:40.481095 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:08:40.481109 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:08:40.481119 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:08:40.481234 | orchestrator | 2026-04-05 01:08:40.481248 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 01:08:40.481260 | orchestrator | Sunday 05 April 2026 01:05:30 +0000 (0:00:00.360) 0:00:00.653 ********** 2026-04-05 01:08:40.481271 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-04-05 01:08:40.481283 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-04-05 01:08:40.481293 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-04-05 01:08:40.481304 | orchestrator | 2026-04-05 01:08:40.481370 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-04-05 01:08:40.481391 | orchestrator | 2026-04-05 01:08:40.481414 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-05 01:08:40.481435 | orchestrator | Sunday 05 April 2026 01:05:30 +0000 (0:00:00.440) 0:00:01.094 ********** 2026-04-05 01:08:40.481450 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:08:40.481462 | orchestrator | 2026-04-05 01:08:40.481473 | orchestrator | TASK [service-ks-register : glance | Creating/deleting services] *************** 2026-04-05 01:08:40.481518 | orchestrator | Sunday 05 April 2026 01:05:31 +0000 (0:00:00.639) 0:00:01.734 ********** 2026-04-05 01:08:40.481537 | orchestrator | changed: 
[testbed-node-0] => (item=glance (image)) 2026-04-05 01:08:40.481622 | orchestrator | 2026-04-05 01:08:40.481637 | orchestrator | TASK [service-ks-register : glance | Creating/deleting endpoints] ************** 2026-04-05 01:08:40.481654 | orchestrator | Sunday 05 April 2026 01:05:36 +0000 (0:00:05.045) 0:00:06.779 ********** 2026-04-05 01:08:40.481678 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-04-05 01:08:40.481704 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-04-05 01:08:40.481722 | orchestrator | 2026-04-05 01:08:40.481740 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-04-05 01:08:40.481758 | orchestrator | Sunday 05 April 2026 01:05:45 +0000 (0:00:08.803) 0:00:15.583 ********** 2026-04-05 01:08:40.481776 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-04-05 01:08:40.481794 | orchestrator | 2026-04-05 01:08:40.481847 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-04-05 01:08:40.481866 | orchestrator | Sunday 05 April 2026 01:05:48 +0000 (0:00:03.839) 0:00:19.422 ********** 2026-04-05 01:08:40.481885 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-04-05 01:08:40.481902 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-05 01:08:40.481913 | orchestrator | 2026-04-05 01:08:40.481922 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-04-05 01:08:40.481932 | orchestrator | Sunday 05 April 2026 01:05:53 +0000 (0:00:04.400) 0:00:23.822 ********** 2026-04-05 01:08:40.481942 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-05 01:08:40.481952 | orchestrator | 2026-04-05 01:08:40.481961 | orchestrator | TASK [service-ks-register : glance | Granting/revoking user roles] ************* 
2026-04-05 01:08:40.481971 | orchestrator | Sunday 05 April 2026 01:05:56 +0000 (0:00:03.341) 0:00:27.163 ********** 2026-04-05 01:08:40.481982 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-04-05 01:08:40.481998 | orchestrator | 2026-04-05 01:08:40.482090 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-04-05 01:08:40.482187 | orchestrator | Sunday 05 April 2026 01:06:00 +0000 (0:00:04.253) 0:00:31.417 ********** 2026-04-05 01:08:40.482230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 
'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 01:08:40.482264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 
check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 01:08:40.482283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 01:08:40.482295 | orchestrator | 2026-04-05 01:08:40.482311 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-05 
01:08:40.482408 | orchestrator | Sunday 05 April 2026 01:06:05 +0000 (0:00:04.582) 0:00:36.000 ********** 2026-04-05 01:08:40.482436 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:08:40.482449 | orchestrator | 2026-04-05 01:08:40.482480 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-04-05 01:08:40.482490 | orchestrator | Sunday 05 April 2026 01:06:06 +0000 (0:00:00.759) 0:00:36.759 ********** 2026-04-05 01:08:40.482500 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:08:40.482509 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:08:40.482519 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:08:40.482528 | orchestrator | 2026-04-05 01:08:40.482540 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-04-05 01:08:40.482558 | orchestrator | Sunday 05 April 2026 01:06:10 +0000 (0:00:04.020) 0:00:40.780 ********** 2026-04-05 01:08:40.482568 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-04-05 01:08:40.482580 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-04-05 01:08:40.482590 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-04-05 01:08:40.482599 | orchestrator | 2026-04-05 01:08:40.482609 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-04-05 01:08:40.482618 | orchestrator | Sunday 05 April 2026 01:06:12 +0000 (0:00:01.888) 0:00:42.669 ********** 2026-04-05 01:08:40.482628 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 
'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-04-05 01:08:40.482638 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-04-05 01:08:40.482647 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-04-05 01:08:40.482657 | orchestrator | 2026-04-05 01:08:40.482666 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-04-05 01:08:40.482676 | orchestrator | Sunday 05 April 2026 01:06:13 +0000 (0:00:01.432) 0:00:44.102 ********** 2026-04-05 01:08:40.482686 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:08:40.482696 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:08:40.482706 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:08:40.482715 | orchestrator | 2026-04-05 01:08:40.482725 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-04-05 01:08:40.482735 | orchestrator | Sunday 05 April 2026 01:06:14 +0000 (0:00:00.757) 0:00:44.860 ********** 2026-04-05 01:08:40.482744 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:08:40.482754 | orchestrator | 2026-04-05 01:08:40.482763 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-04-05 01:08:40.482773 | orchestrator | Sunday 05 April 2026 01:06:14 +0000 (0:00:00.262) 0:00:45.123 ********** 2026-04-05 01:08:40.482783 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:08:40.482792 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:08:40.482802 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:08:40.482811 | orchestrator | 2026-04-05 01:08:40.482821 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-05 01:08:40.482830 | orchestrator | Sunday 05 April 2026 
01:06:15 +0000 (0:00:00.512) 0:00:45.635 ********** 2026-04-05 01:08:40.482840 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:08:40.482850 | orchestrator | 2026-04-05 01:08:40.482867 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-04-05 01:08:40.482883 | orchestrator | Sunday 05 April 2026 01:06:16 +0000 (0:00:01.319) 0:00:46.954 ********** 2026-04-05 01:08:40.482904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 01:08:40.482925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 
2 fall 5', '']}}}}) 2026-04-05 01:08:40.482941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 01:08:40.482958 | orchestrator | 2026-04-05 01:08:40.482968 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-04-05 01:08:40.482978 | orchestrator 
| Sunday 05 April 2026 01:06:22 +0000 (0:00:06.277) 0:00:53.232 ********** 2026-04-05 01:08:40.482996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 01:08:40.483007 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:08:40.483023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 01:08:40.483040 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:08:40.483058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 01:08:40.483069 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:08:40.483079 | orchestrator | 2026-04-05 01:08:40.483089 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-04-05 01:08:40.483098 | orchestrator | Sunday 05 April 2026 01:06:25 +0000 (0:00:03.218) 0:00:56.451 ********** 2026-04-05 01:08:40.483114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 01:08:40.483140 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:08:40.483157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 01:08:40.483174 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:08:40.483267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 01:08:40.483300 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:08:40.483340 | orchestrator | 2026-04-05 01:08:40.483357 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-04-05 01:08:40.483374 | orchestrator | Sunday 05 April 2026 01:06:29 +0000 (0:00:03.623) 0:01:00.075 ********** 2026-04-05 01:08:40.483392 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:08:40.483418 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:08:40.483436 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:08:40.483453 | orchestrator | 
2026-04-05 01:08:40.483470 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-04-05 01:08:40.483484 | orchestrator | Sunday 05 April 2026 01:06:33 +0000 (0:00:04.257) 0:01:04.332 ********** 2026-04-05 01:08:40.483505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', 
'']}}}}) 2026-04-05 01:08:40.483517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 01:08:40.483544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 01:08:40.483556 | orchestrator | 2026-04-05 01:08:40.483572 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-04-05 01:08:40.483589 | orchestrator | Sunday 05 April 2026 01:06:38 +0000 (0:00:04.218) 0:01:08.550 ********** 2026-04-05 01:08:40.483606 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:08:40.483622 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:08:40.483638 
| orchestrator | changed: [testbed-node-1] 2026-04-05 01:08:40.483654 | orchestrator | 2026-04-05 01:08:40.483669 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-04-05 01:08:40.483684 | orchestrator | Sunday 05 April 2026 01:06:45 +0000 (0:00:07.561) 0:01:16.112 ********** 2026-04-05 01:08:40.483700 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:08:40.483718 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:08:40.483734 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:08:40.483750 | orchestrator | 2026-04-05 01:08:40.483766 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-04-05 01:08:40.483782 | orchestrator | Sunday 05 April 2026 01:06:49 +0000 (0:00:03.629) 0:01:19.741 ********** 2026-04-05 01:08:40.483796 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:08:40.483811 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:08:40.483827 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:08:40.483843 | orchestrator | 2026-04-05 01:08:40.483858 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-04-05 01:08:40.483872 | orchestrator | Sunday 05 April 2026 01:06:52 +0000 (0:00:03.158) 0:01:22.900 ********** 2026-04-05 01:08:40.483888 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:08:40.483904 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:08:40.483921 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:08:40.483936 | orchestrator | 2026-04-05 01:08:40.483952 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-04-05 01:08:40.483968 | orchestrator | Sunday 05 April 2026 01:06:56 +0000 (0:00:03.736) 0:01:26.636 ********** 2026-04-05 01:08:40.483985 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:08:40.484014 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:08:40.484031 | 
orchestrator | skipping: [testbed-node-2] 2026-04-05 01:08:40.484047 | orchestrator | 2026-04-05 01:08:40.484062 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-04-05 01:08:40.484072 | orchestrator | Sunday 05 April 2026 01:06:56 +0000 (0:00:00.352) 0:01:26.989 ********** 2026-04-05 01:08:40.484082 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-05 01:08:40.484092 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:08:40.484102 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-05 01:08:40.484112 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:08:40.484121 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-05 01:08:40.484131 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:08:40.484141 | orchestrator | 2026-04-05 01:08:40.484151 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-04-05 01:08:40.484161 | orchestrator | Sunday 05 April 2026 01:07:02 +0000 (0:00:05.714) 0:01:32.704 ********** 2026-04-05 01:08:40.484170 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:08:40.484180 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:08:40.484189 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:08:40.484199 | orchestrator | 2026-04-05 01:08:40.484209 | orchestrator | TASK [glance : Generating 'hostid' file for glance_api] ************************ 2026-04-05 01:08:40.484218 | orchestrator | Sunday 05 April 2026 01:07:05 +0000 (0:00:03.318) 0:01:36.022 ********** 2026-04-05 01:08:40.484228 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:08:40.484238 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:08:40.484247 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:08:40.484257 | orchestrator | 
2026-04-05 01:08:40.484267 | orchestrator | TASK [service-check-containers : glance | Check containers] ******************** 2026-04-05 01:08:40.484277 | orchestrator | Sunday 05 April 2026 01:07:08 +0000 (0:00:03.067) 0:01:39.090 ********** 2026-04-05 01:08:40.484306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', 
'']}}}}) 2026-04-05 01:08:40.484347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 01:08:40.484376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 01:08:40.484388 | orchestrator | 2026-04-05 01:08:40.484398 | orchestrator | TASK [service-check-containers : glance | Notify handlers to restart containers] *** 2026-04-05 01:08:40.484408 | orchestrator | Sunday 05 April 2026 01:07:12 +0000 (0:00:04.291) 0:01:43.381 ********** 2026-04-05 01:08:40.484418 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 01:08:40.484428 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 
01:08:40.484437 | orchestrator | } 2026-04-05 01:08:40.484447 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 01:08:40.484457 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:08:40.484466 | orchestrator | } 2026-04-05 01:08:40.484482 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 01:08:40.484493 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:08:40.484509 | orchestrator | } 2026-04-05 01:08:40.484519 | orchestrator | 2026-04-05 01:08:40.484528 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 01:08:40.484538 | orchestrator | Sunday 05 April 2026 01:07:13 +0000 (0:00:00.438) 0:01:43.819 ********** 2026-04-05 01:08:40.484549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 01:08:40.484560 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:08:40.484575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 01:08:40.484587 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:08:40.484605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 01:08:40.484622 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:08:40.484632 | orchestrator | 2026-04-05 01:08:40.484642 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-05 01:08:40.484652 | orchestrator | Sunday 05 April 2026 01:07:17 +0000 (0:00:04.119) 0:01:47.939 ********** 2026-04-05 01:08:40.484661 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:08:40.484671 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:08:40.484681 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:08:40.484691 | orchestrator | 2026-04-05 01:08:40.484708 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-04-05 01:08:40.484724 | orchestrator | Sunday 05 April 2026 01:07:17 +0000 (0:00:00.512) 0:01:48.451 ********** 2026-04-05 01:08:40.484740 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:08:40.484757 | orchestrator | 2026-04-05 01:08:40.484772 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-04-05 01:08:40.484788 | orchestrator | Sunday 05 April 2026 01:07:20 +0000 (0:00:02.318) 0:01:50.770 ********** 2026-04-05 01:08:40.484806 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:08:40.484823 | orchestrator | 2026-04-05 01:08:40.484838 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-04-05 01:08:40.484856 | orchestrator | Sunday 05 April 2026 01:07:22 +0000 (0:00:02.162) 0:01:52.932 ********** 2026-04-05 01:08:40.484873 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:08:40.484889 | orchestrator | 2026-04-05 01:08:40.484906 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-04-05 01:08:40.484923 | 
orchestrator | Sunday 05 April 2026 01:07:24 +0000 (0:00:02.159) 0:01:55.091 ********** 2026-04-05 01:08:40.484940 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:08:40.484957 | orchestrator | 2026-04-05 01:08:40.484972 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-04-05 01:08:40.484989 | orchestrator | Sunday 05 April 2026 01:07:53 +0000 (0:00:28.711) 0:02:23.803 ********** 2026-04-05 01:08:40.485014 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:08:40.485031 | orchestrator | 2026-04-05 01:08:40.485047 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-05 01:08:40.485063 | orchestrator | Sunday 05 April 2026 01:07:55 +0000 (0:00:02.364) 0:02:26.167 ********** 2026-04-05 01:08:40.485080 | orchestrator | 2026-04-05 01:08:40.485096 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-05 01:08:40.485123 | orchestrator | Sunday 05 April 2026 01:07:55 +0000 (0:00:00.061) 0:02:26.229 ********** 2026-04-05 01:08:40.485140 | orchestrator | 2026-04-05 01:08:40.485156 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-05 01:08:40.485172 | orchestrator | Sunday 05 April 2026 01:07:55 +0000 (0:00:00.061) 0:02:26.291 ********** 2026-04-05 01:08:40.485189 | orchestrator | 2026-04-05 01:08:40.485206 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-04-05 01:08:40.485222 | orchestrator | Sunday 05 April 2026 01:07:55 +0000 (0:00:00.077) 0:02:26.368 ********** 2026-04-05 01:08:40.485239 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:08:40.485256 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:08:40.485272 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:08:40.485285 | orchestrator | 2026-04-05 01:08:40.485296 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-05 01:08:40.485345 | orchestrator | testbed-node-0 : ok=27  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2026-04-05 01:08:40.485366 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-04-05 01:08:40.485383 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-04-05 01:08:40.485400 | orchestrator | 2026-04-05 01:08:40.485416 | orchestrator | 2026-04-05 01:08:40.485442 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:08:40.485458 | orchestrator | Sunday 05 April 2026 01:08:37 +0000 (0:00:41.669) 0:03:08.038 ********** 2026-04-05 01:08:40.485474 | orchestrator | =============================================================================== 2026-04-05 01:08:40.485491 | orchestrator | glance : Restart glance-api container ---------------------------------- 41.67s 2026-04-05 01:08:40.485507 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 28.71s 2026-04-05 01:08:40.485523 | orchestrator | service-ks-register : glance | Creating/deleting endpoints -------------- 8.80s 2026-04-05 01:08:40.485533 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 7.56s 2026-04-05 01:08:40.485542 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 6.28s 2026-04-05 01:08:40.485552 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 5.71s 2026-04-05 01:08:40.485561 | orchestrator | service-ks-register : glance | Creating/deleting services --------------- 5.05s 2026-04-05 01:08:40.485571 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.58s 2026-04-05 01:08:40.485586 | orchestrator | service-ks-register : glance | 
Creating users --------------------------- 4.40s 2026-04-05 01:08:40.485603 | orchestrator | service-check-containers : glance | Check containers -------------------- 4.29s 2026-04-05 01:08:40.485620 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.26s 2026-04-05 01:08:40.485635 | orchestrator | service-ks-register : glance | Granting/revoking user roles ------------- 4.25s 2026-04-05 01:08:40.485652 | orchestrator | glance : Copying over config.json files for services -------------------- 4.22s 2026-04-05 01:08:40.485668 | orchestrator | service-check-containers : Include tasks -------------------------------- 4.12s 2026-04-05 01:08:40.485684 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.02s 2026-04-05 01:08:40.485701 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.84s 2026-04-05 01:08:40.485719 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.74s 2026-04-05 01:08:40.485735 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.63s 2026-04-05 01:08:40.485751 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.62s 2026-04-05 01:08:40.485762 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.34s 2026-04-05 01:08:40.485781 | orchestrator | 2026-04-05 01:08:40 | INFO  | Task 2b195116-5e97-483a-acf3-e3eff2cfdba4 is in state STARTED 2026-04-05 01:08:40.485791 | orchestrator | 2026-04-05 01:08:40 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:08:43.517853 | orchestrator | 2026-04-05 01:08:43 | INFO  | Task f84cf7f6-6338-408e-9a11-f6188f0fc692 is in state STARTED 2026-04-05 01:08:43.518807 | orchestrator | 2026-04-05 01:08:43 | INFO  | Task e63a9692-007a-427d-ba16-7bf26e46c2c2 is in state STARTED 2026-04-05 01:08:43.523004 | orchestrator | 2026-04-05 01:08:43 | 
INFO  | Task 3209e145-a6d4-44ab-be65-8c9c476b3f85 is in state STARTED
2026-04-05 01:08:43.524472 | orchestrator | 2026-04-05 01:08:43 | INFO  | Task 2b195116-5e97-483a-acf3-e3eff2cfdba4 is in state STARTED
2026-04-05 01:08:43.524549 | orchestrator | 2026-04-05 01:08:43 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:08:46.577127 | orchestrator | 2026-04-05 01:08:46 | INFO  | Task f84cf7f6-6338-408e-9a11-f6188f0fc692 is in state STARTED
2026-04-05 01:08:46.578292 | orchestrator | 2026-04-05 01:08:46 | INFO  | Task e63a9692-007a-427d-ba16-7bf26e46c2c2 is in state STARTED
2026-04-05 01:08:46.579125 | orchestrator | 2026-04-05 01:08:46 | INFO  | Task 3209e145-a6d4-44ab-be65-8c9c476b3f85 is in state STARTED
2026-04-05 01:08:46.580089 | orchestrator | 2026-04-05 01:08:46 | INFO  | Task 2b195116-5e97-483a-acf3-e3eff2cfdba4 is in state STARTED
2026-04-05 01:08:46.582362 | orchestrator | 2026-04-05 01:08:46 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:08:49.621821 | orchestrator | 2026-04-05 01:08:49 | INFO  | Task f84cf7f6-6338-408e-9a11-f6188f0fc692 is in state STARTED
2026-04-05 01:08:49.622240 | orchestrator | 2026-04-05 01:08:49 | INFO  | Task e63a9692-007a-427d-ba16-7bf26e46c2c2 is in state STARTED
2026-04-05 01:08:49.622892 | orchestrator | 2026-04-05 01:08:49 | INFO  | Task 3209e145-a6d4-44ab-be65-8c9c476b3f85 is in state STARTED
2026-04-05 01:08:49.623874 | orchestrator | 2026-04-05 01:08:49 | INFO  | Task 2b195116-5e97-483a-acf3-e3eff2cfdba4 is in state STARTED
2026-04-05 01:08:49.625494 | orchestrator | 2026-04-05 01:08:49 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:08:52.658769 | orchestrator | 2026-04-05 01:08:52 | INFO  | Task f84cf7f6-6338-408e-9a11-f6188f0fc692 is in state STARTED
2026-04-05 01:08:52.659092 | orchestrator | 2026-04-05 01:08:52 | INFO  | Task e63a9692-007a-427d-ba16-7bf26e46c2c2 is in state STARTED
2026-04-05 01:08:52.660120 | orchestrator | 2026-04-05 01:08:52 | INFO  | Task 3209e145-a6d4-44ab-be65-8c9c476b3f85 is in state STARTED
2026-04-05 01:08:52.661249 | orchestrator | 2026-04-05 01:08:52 | INFO  | Task 2b195116-5e97-483a-acf3-e3eff2cfdba4 is in state STARTED
2026-04-05 01:08:52.661905 | orchestrator | 2026-04-05 01:08:52 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:08:55.703986 | orchestrator | 2026-04-05 01:08:55 | INFO  | Task f84cf7f6-6338-408e-9a11-f6188f0fc692 is in state STARTED
2026-04-05 01:08:55.705052 | orchestrator | 2026-04-05 01:08:55 | INFO  | Task e63a9692-007a-427d-ba16-7bf26e46c2c2 is in state STARTED
2026-04-05 01:08:55.706608 | orchestrator | 2026-04-05 01:08:55 | INFO  | Task 3209e145-a6d4-44ab-be65-8c9c476b3f85 is in state STARTED
2026-04-05 01:08:55.707102 | orchestrator | 2026-04-05 01:08:55 | INFO  | Task 2b195116-5e97-483a-acf3-e3eff2cfdba4 is in state STARTED
2026-04-05 01:08:55.707249 | orchestrator | 2026-04-05 01:08:55 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:08:58.739805 | orchestrator | 2026-04-05 01:08:58 | INFO  | Task f84cf7f6-6338-408e-9a11-f6188f0fc692 is in state STARTED
2026-04-05 01:08:58.740620 | orchestrator | 2026-04-05 01:08:58 | INFO  | Task e63a9692-007a-427d-ba16-7bf26e46c2c2 is in state STARTED
2026-04-05 01:08:58.741514 | orchestrator | 2026-04-05 01:08:58 | INFO  | Task 3209e145-a6d4-44ab-be65-8c9c476b3f85 is in state STARTED
2026-04-05 01:08:58.742126 | orchestrator | 2026-04-05 01:08:58 | INFO  | Task 2b195116-5e97-483a-acf3-e3eff2cfdba4 is in state STARTED
2026-04-05 01:08:58.742241 | orchestrator | 2026-04-05 01:08:58 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:09:01.927361 | orchestrator | 2026-04-05 01:09:01 | INFO  | Task f84cf7f6-6338-408e-9a11-f6188f0fc692 is in state STARTED
2026-04-05 01:09:01.928097 | orchestrator | 2026-04-05 01:09:01 | INFO  | Task e63a9692-007a-427d-ba16-7bf26e46c2c2 is in state STARTED
2026-04-05 01:09:01.929158 | orchestrator | 2026-04-05 01:09:01 | INFO  | Task 3209e145-a6d4-44ab-be65-8c9c476b3f85 is in state STARTED
2026-04-05 01:09:01.930480 | orchestrator | 2026-04-05 01:09:01 | INFO  | Task 2b195116-5e97-483a-acf3-e3eff2cfdba4 is in state STARTED
2026-04-05 01:09:01.930520 | orchestrator | 2026-04-05 01:09:01 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:09:04.971187 | orchestrator | 2026-04-05 01:09:04 | INFO  | Task f84cf7f6-6338-408e-9a11-f6188f0fc692 is in state STARTED
2026-04-05 01:09:04.973081 | orchestrator | 2026-04-05 01:09:04 | INFO  | Task e63a9692-007a-427d-ba16-7bf26e46c2c2 is in state STARTED
2026-04-05 01:09:04.973744 | orchestrator | 2026-04-05 01:09:04 | INFO  | Task 3209e145-a6d4-44ab-be65-8c9c476b3f85 is in state STARTED
2026-04-05 01:09:04.975085 | orchestrator | 2026-04-05 01:09:04 | INFO  | Task 2b195116-5e97-483a-acf3-e3eff2cfdba4 is in state STARTED
2026-04-05 01:09:04.975213 | orchestrator | 2026-04-05 01:09:04 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:09:08.014583 | orchestrator | 2026-04-05 01:09:08 | INFO  | Task f84cf7f6-6338-408e-9a11-f6188f0fc692 is in state STARTED
2026-04-05 01:09:08.015714 | orchestrator | 2026-04-05 01:09:08 | INFO  | Task e63a9692-007a-427d-ba16-7bf26e46c2c2 is in state STARTED
2026-04-05 01:09:08.017285 | orchestrator | 2026-04-05 01:09:08 | INFO  | Task 3209e145-a6d4-44ab-be65-8c9c476b3f85 is in state STARTED
2026-04-05 01:09:08.018286 | orchestrator | 2026-04-05 01:09:08 | INFO  | Task 2b195116-5e97-483a-acf3-e3eff2cfdba4 is in state STARTED
2026-04-05 01:09:08.018620 | orchestrator | 2026-04-05 01:09:08 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:09:11.070709 | orchestrator | 2026-04-05 01:09:11 | INFO  | Task f84cf7f6-6338-408e-9a11-f6188f0fc692 is in state STARTED
2026-04-05 01:09:11.072229 | orchestrator | 2026-04-05 01:09:11 | INFO  | Task e63a9692-007a-427d-ba16-7bf26e46c2c2 is in state STARTED
2026-04-05 01:09:11.073103 | orchestrator | 2026-04-05 01:09:11 | INFO  | Task 3209e145-a6d4-44ab-be65-8c9c476b3f85 is in state STARTED
2026-04-05 01:09:11.073834 | orchestrator | 2026-04-05 01:09:11 | INFO  | Task 2b195116-5e97-483a-acf3-e3eff2cfdba4 is in state STARTED
2026-04-05 01:09:11.073962 | orchestrator | 2026-04-05 01:09:11 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:09:14.110757 | orchestrator | 2026-04-05 01:09:14 | INFO  | Task f84cf7f6-6338-408e-9a11-f6188f0fc692 is in state STARTED
2026-04-05 01:09:14.123438 | orchestrator | 2026-04-05 01:09:14 | INFO  | Task e63a9692-007a-427d-ba16-7bf26e46c2c2 is in state STARTED
2026-04-05 01:09:14.125846 | orchestrator | 2026-04-05 01:09:14 | INFO  | Task 3209e145-a6d4-44ab-be65-8c9c476b3f85 is in state STARTED
2026-04-05 01:09:14.126678 | orchestrator | 2026-04-05 01:09:14 | INFO  | Task 2b195116-5e97-483a-acf3-e3eff2cfdba4 is in state STARTED
2026-04-05 01:09:14.126709 | orchestrator | 2026-04-05 01:09:14 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:09:17.166841 | orchestrator | 2026-04-05 01:09:17 | INFO  | Task f84cf7f6-6338-408e-9a11-f6188f0fc692 is in state STARTED
2026-04-05 01:09:17.167601 | orchestrator | 2026-04-05 01:09:17 | INFO  | Task e63a9692-007a-427d-ba16-7bf26e46c2c2 is in state STARTED
2026-04-05 01:09:17.169766 | orchestrator | 2026-04-05 01:09:17 | INFO  | Task 3209e145-a6d4-44ab-be65-8c9c476b3f85 is in state SUCCESS
2026-04-05 01:09:17.172838 | orchestrator | 2026-04-05 01:09:17 | INFO  | Task 319e4382-809c-4ca3-b7ca-dfc5f9781d77 is in state STARTED
2026-04-05 01:09:17.174471 | orchestrator |
2026-04-05 01:09:17.174515 | orchestrator |
2026-04-05 01:09:17.174529 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 01:09:17.174589 | orchestrator |
2026-04-05 01:09:17.174603 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 01:09:17.174698 | orchestrator | Sunday 05 April 2026 01:05:54 +0000 (0:00:00.512) 0:00:00.513 
**********
2026-04-05 01:09:17.174713 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:09:17.174726 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:09:17.174737 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:09:17.174748 | orchestrator |
2026-04-05 01:09:17.174759 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 01:09:17.174801 | orchestrator | Sunday 05 April 2026 01:05:54 +0000 (0:00:00.525) 0:00:01.038 **********
2026-04-05 01:09:17.174813 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2026-04-05 01:09:17.174825 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2026-04-05 01:09:17.174836 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2026-04-05 01:09:17.174906 | orchestrator |
2026-04-05 01:09:17.174919 | orchestrator | PLAY [Apply role cinder] *******************************************************
2026-04-05 01:09:17.175021 | orchestrator |
2026-04-05 01:09:17.175034 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-04-05 01:09:17.175046 | orchestrator | Sunday 05 April 2026 01:05:54 +0000 (0:00:00.372) 0:00:01.411 **********
2026-04-05 01:09:17.175057 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:09:17.175072 | orchestrator |
2026-04-05 01:09:17.175086 | orchestrator | TASK [service-ks-register : cinder | Creating/deleting services] ***************
2026-04-05 01:09:17.175100 | orchestrator | Sunday 05 April 2026 01:05:55 +0000 (0:00:00.634) 0:00:02.045 **********
2026-04-05 01:09:17.175114 | orchestrator | changed: [testbed-node-0] => (item=cinder (block-storage))
2026-04-05 01:09:17.175127 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2026-04-05 01:09:17.175186 | orchestrator |
2026-04-05 01:09:17.175199 | orchestrator | TASK [service-ks-register : cinder | Creating/deleting endpoints] **************
2026-04-05 01:09:17.175212 | orchestrator | Sunday 05 April 2026 01:06:03 +0000 (0:00:08.114) 0:00:10.159 **********
2026-04-05 01:09:17.175248 | orchestrator | changed: [testbed-node-0] => (item=cinder -> https://api-int.testbed.osism.xyz:8776/v3 -> internal)
2026-04-05 01:09:17.175262 | orchestrator | changed: [testbed-node-0] => (item=cinder -> https://api.testbed.osism.xyz:8776/v3 -> public)
2026-04-05 01:09:17.175310 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2026-04-05 01:09:17.175322 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2026-04-05 01:09:17.175333 | orchestrator |
2026-04-05 01:09:17.175388 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2026-04-05 01:09:17.175401 | orchestrator | Sunday 05 April 2026 01:06:17 +0000 (0:00:14.263) 0:00:24.422 **********
2026-04-05 01:09:17.175413 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-05 01:09:17.175424 | orchestrator |
2026-04-05 01:09:17.175434 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2026-04-05 01:09:17.175445 | orchestrator | Sunday 05 April 2026 01:06:21 +0000 (0:00:03.722) 0:00:28.145 **********
2026-04-05 01:09:17.175481 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2026-04-05 01:09:17.175493 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-05 01:09:17.175504 | orchestrator |
2026-04-05 01:09:17.175515 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2026-04-05 01:09:17.175526 | orchestrator | Sunday 05 April 2026 01:06:25 +0000 (0:00:03.467) 0:00:32.238 **********
2026-04-05 01:09:17.175537 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-05 
01:09:17.175548 | orchestrator | 2026-04-05 01:09:17.175559 | orchestrator | TASK [service-ks-register : cinder | Granting/revoking user roles] ************* 2026-04-05 01:09:17.175569 | orchestrator | Sunday 05 April 2026 01:06:29 +0000 (0:00:03.467) 0:00:35.706 ********** 2026-04-05 01:09:17.175580 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-04-05 01:09:17.175591 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-04-05 01:09:17.175602 | orchestrator | 2026-04-05 01:09:17.175613 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-04-05 01:09:17.175624 | orchestrator | Sunday 05 April 2026 01:06:37 +0000 (0:00:07.839) 0:00:43.546 ********** 2026-04-05 01:09:17.175659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:09:17.175677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:09:17.175691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.175711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.175731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.175745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 
01:09:17.175766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.175778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.175791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.175815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.175827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.175845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-05 01:09:17.175857 | orchestrator |
2026-04-05 01:09:17.175868 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-04-05 01:09:17.175880 | orchestrator | Sunday 05 April 2026 01:06:40 +0000 (0:00:03.273) 0:00:46.819 **********
2026-04-05 01:09:17.175890 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:09:17.175906 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:09:17.175925 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:09:17.175943 | orchestrator |
2026-04-05 01:09:17.175962 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-04-05 01:09:17.175981 | orchestrator | Sunday 05 April 2026 01:06:40 +0000 (0:00:00.396) 0:00:47.215 **********
2026-04-05 01:09:17.176000 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:09:17.176029 | orchestrator |
2026-04-05 01:09:17.176050 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2026-04-05 01:09:17.176069 | orchestrator | Sunday 05 April 2026 01:06:41 +0000 (0:00:00.555) 0:00:47.771 **********
2026-04-05 01:09:17.176088 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume)
2026-04-05 01:09:17.176107 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume)
2026-04-05 01:09:17.176127 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume)
2026-04-05 01:09:17.176147 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup)
2026-04-05 01:09:17.176167 | 
orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-04-05 01:09:17.176203 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-04-05 01:09:17.176221 | orchestrator | 2026-04-05 01:09:17.176239 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-04-05 01:09:17.176255 | orchestrator | Sunday 05 April 2026 01:06:45 +0000 (0:00:03.711) 0:00:51.482 ********** 2026-04-05 01:09:17.176283 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-05 01:09:17.176306 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-05 01:09:17.176339 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-05 01:09:17.176389 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-05 01:09:17.176431 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-05 01:09:17.176452 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-05 01:09:17.176474 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-05 01:09:17.176505 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-05 01:09:17.176527 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-05 01:09:17.176567 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-05 01:09:17.176588 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-05 01:09:17.176621 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-05 01:09:17.176635 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-05 01:09:17.176656 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-05 01:09:17.176685 | orchestrator | changed: 
[testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-05 01:09:17.176697 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-04-05 01:09:17.176719 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': 
[''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-04-05 01:09:17.176731 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-04-05 01:09:17.176761 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-05 01:09:17.176773 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-05 01:09:17.176785 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 
'pool': 'volumes', 'enabled': True}]) 2026-04-05 01:09:17.177510 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-04-05 01:09:17.177554 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-04-05 01:09:17.177574 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-04-05 01:09:17.177586 | orchestrator | 2026-04-05 01:09:17.177598 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-04-05 01:09:17.177609 | orchestrator | Sunday 05 April 2026 01:06:51 +0000 (0:00:06.047) 0:00:57.530 ********** 2026-04-05 01:09:17.177621 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-05 01:09:17.177634 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-05 01:09:17.177645 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-05 01:09:17.177656 | orchestrator | 2026-04-05 01:09:17.177667 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-04-05 01:09:17.177678 | orchestrator | Sunday 05 April 2026 01:06:52 +0000 (0:00:01.589) 0:00:59.119 ********** 2026-04-05 01:09:17.177689 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 
'enabled': True}) 2026-04-05 01:09:17.177700 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-05 01:09:17.177711 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-05 01:09:17.177722 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-04-05 01:09:17.177734 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-04-05 01:09:17.177745 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-04-05 01:09:17.177762 | orchestrator | 2026-04-05 01:09:17.177773 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-04-05 01:09:17.177784 | orchestrator | Sunday 05 April 2026 01:06:55 +0000 (0:00:03.236) 0:01:02.356 ********** 2026-04-05 01:09:17.177803 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-04-05 01:09:17.177815 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-04-05 01:09:17.177826 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-04-05 01:09:17.177837 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-04-05 01:09:17.177847 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-04-05 01:09:17.177857 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-04-05 01:09:17.177866 | orchestrator | 2026-04-05 01:09:17.177877 | orchestrator | TASK [cinder : Check if policies shall be overwritten] 
************************* 2026-04-05 01:09:17.177887 | orchestrator | Sunday 05 April 2026 01:06:56 +0000 (0:00:01.057) 0:01:03.413 ********** 2026-04-05 01:09:17.177897 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:09:17.177907 | orchestrator | 2026-04-05 01:09:17.177917 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-04-05 01:09:17.177927 | orchestrator | Sunday 05 April 2026 01:06:57 +0000 (0:00:00.356) 0:01:03.770 ********** 2026-04-05 01:09:17.177964 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:09:17.177974 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:09:17.177984 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:09:17.177994 | orchestrator | 2026-04-05 01:09:17.178004 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-05 01:09:17.178066 | orchestrator | Sunday 05 April 2026 01:06:57 +0000 (0:00:00.353) 0:01:04.123 ********** 2026-04-05 01:09:17.178082 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:09:17.178093 | orchestrator | 2026-04-05 01:09:17.178104 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-04-05 01:09:17.178114 | orchestrator | Sunday 05 April 2026 01:06:58 +0000 (0:00:00.742) 0:01:04.866 ********** 2026-04-05 01:09:17.178131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:09:17.178143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:09:17.178170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:09:17.178183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.178194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.178205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 
'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.178220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.178232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.178256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.178267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.178278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.178293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.178303 | orchestrator | 2026-04-05 01:09:17.178313 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-04-05 01:09:17.178323 | orchestrator | Sunday 05 April 2026 01:07:03 +0000 (0:00:05.522) 0:01:10.388 ********** 2026-04-05 01:09:17.178333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:09:17.178350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 01:09:17.178398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 01:09:17.178410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:09:17.178426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 01:09:17.178437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 01:09:17.178455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 01:09:17.178466 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:09:17.178483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 01:09:17.178494 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:09:17.178504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 
'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:09:17.178515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 01:09:17.178530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 01:09:17.178547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 01:09:17.178557 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:09:17.178567 | orchestrator | 2026-04-05 01:09:17.178577 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-04-05 01:09:17.178587 | orchestrator | Sunday 05 April 2026 01:07:04 +0000 (0:00:01.032) 0:01:11.421 ********** 2026-04-05 01:09:17.178604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:09:17.178615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 01:09:17.178626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 01:09:17.178644 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 01:09:17.178661 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:09:17.178672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:09:17.178689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 01:09:17.178700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 01:09:17.178710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 01:09:17.178720 | orchestrator | 
skipping: [testbed-node-1] 2026-04-05 01:09:17.178735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:09:17.178751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 01:09:17.178762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 01:09:17.178777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 01:09:17.178788 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:09:17.178798 | orchestrator | 2026-04-05 01:09:17.178808 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-04-05 01:09:17.178818 | orchestrator | Sunday 05 April 2026 01:07:05 +0000 (0:00:00.897) 0:01:12.318 ********** 2026-04-05 01:09:17.178828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:09:17.178852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:09:17.178863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:09:17.178881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.178891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.178902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.178922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.178932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.178942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.178958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.178969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': 
True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.178979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.178995 | orchestrator | 2026-04-05 01:09:17.179004 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-04-05 01:09:17.179014 | orchestrator | Sunday 05 April 2026 01:07:10 +0000 (0:00:04.882) 0:01:17.201 ********** 2026-04-05 01:09:17.179028 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-04-05 01:09:17.179038 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:09:17.179048 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-04-05 01:09:17.179057 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:09:17.179067 | orchestrator | skipping: [testbed-node-2] => 
(item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-04-05 01:09:17.179077 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:09:17.179087 | orchestrator | 2026-04-05 01:09:17.179096 | orchestrator | TASK [Configure uWSGI for Cinder] ********************************************** 2026-04-05 01:09:17.179106 | orchestrator | Sunday 05 April 2026 01:07:12 +0000 (0:00:01.553) 0:01:18.755 ********** 2026-04-05 01:09:17.179115 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:09:17.179125 | orchestrator | 2026-04-05 01:09:17.179134 | orchestrator | TASK [service-uwsgi-config : Copying over cinder-api uWSGI config] ************* 2026-04-05 01:09:17.179144 | orchestrator | Sunday 05 April 2026 01:07:13 +0000 (0:00:00.890) 0:01:19.645 ********** 2026-04-05 01:09:17.179153 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:09:17.179163 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:09:17.179172 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:09:17.179182 | orchestrator | 2026-04-05 01:09:17.179191 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-04-05 01:09:17.179201 | orchestrator | Sunday 05 April 2026 01:07:16 +0000 (0:00:02.802) 0:01:22.447 ********** 2026-04-05 01:09:17.179216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:09:17.179228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:09:17.179251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:09:17.179262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.179274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.179285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.179300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.179317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.179327 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.179342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.179353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.179381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.179391 | orchestrator | 2026-04-05 01:09:17.179406 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-04-05 01:09:17.179416 | orchestrator | Sunday 05 April 2026 01:07:26 +0000 (0:00:10.925) 0:01:33.373 ********** 2026-04-05 01:09:17.179426 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:09:17.179435 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:09:17.179451 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:09:17.179461 | orchestrator | 2026-04-05 01:09:17.179471 | orchestrator | TASK [cinder : Generating 'hostid' file for cinder_volume] ********************* 2026-04-05 01:09:17.179480 | orchestrator | Sunday 05 April 2026 01:07:28 +0000 (0:00:01.606) 0:01:34.979 ********** 2026-04-05 01:09:17.179490 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:09:17.179500 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:09:17.179522 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:09:17.179532 | orchestrator | 2026-04-05 
01:09:17.179541 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-04-05 01:09:17.179561 | orchestrator | Sunday 05 April 2026 01:07:30 +0000 (0:00:01.554) 0:01:36.533 ********** 2026-04-05 01:09:17.179572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:09:17.179591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 01:09:17.179602 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 01:09:17.179612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 01:09:17.179623 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:09:17.179640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:09:17.179658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 01:09:17.179674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:09:17.179685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 01:09:17.179695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 01:09:17.179717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 
'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 01:09:17.179727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 01:09:17.179737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 01:09:17.179747 | orchestrator | skipping: [testbed-node-1] 
2026-04-05 01:09:17.179757 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:09:17.179766 | orchestrator | 2026-04-05 01:09:17.179776 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-04-05 01:09:17.179785 | orchestrator | Sunday 05 April 2026 01:07:31 +0000 (0:00:01.152) 0:01:37.686 ********** 2026-04-05 01:09:17.179795 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:09:17.179805 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:09:17.179819 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:09:17.179829 | orchestrator | 2026-04-05 01:09:17.179838 | orchestrator | TASK [service-check-containers : cinder | Check containers] ******************** 2026-04-05 01:09:17.179848 | orchestrator | Sunday 05 April 2026 01:07:31 +0000 (0:00:00.339) 0:01:38.026 ********** 2026-04-05 01:09:17.179859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:09:17.179881 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:09:17.179892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:09:17.179903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.179918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.179928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.179938 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.179962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.179972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.179982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.179997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.180007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 
'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:17.180024 | orchestrator | 2026-04-05 01:09:17.180033 | orchestrator | TASK [service-check-containers : cinder | Notify handlers to restart containers] *** 2026-04-05 01:09:17.180043 | orchestrator | Sunday 05 April 2026 01:07:34 +0000 (0:00:03.252) 0:01:41.278 ********** 2026-04-05 01:09:17.180053 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 01:09:17.180062 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:09:17.180072 | orchestrator | } 2026-04-05 01:09:17.180082 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 01:09:17.180091 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:09:17.180101 | orchestrator | } 2026-04-05 01:09:17.180110 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 01:09:17.180119 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:09:17.180129 | orchestrator | } 2026-04-05 01:09:17.180138 | orchestrator | 2026-04-05 01:09:17.180148 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 01:09:17.180157 | orchestrator | Sunday 05 April 2026 01:07:35 +0000 (0:00:00.319) 0:01:41.598 ********** 2026-04-05 01:09:17.180175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:09:17.180186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 01:09:17.180201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 01:09:17.180212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 01:09:17.180228 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:09:17.180238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:09:17.180254 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 01:09:17.180265 | orchestrator | 2026-04-05 01:09:17 | INFO  | Task 2b195116-5e97-483a-acf3-e3eff2cfdba4 is in state STARTED 2026-04-05 01:09:17.180318 | orchestrator | 2026-04-05 01:09:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:09:17.180330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 01:09:17.180341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 01:09:17.180373 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:09:17.180385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:09:17.180403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 01:09:17.180419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 01:09:17.180429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 01:09:17.180439 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:09:17.180449 | orchestrator | 2026-04-05 01:09:17.180458 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-05 01:09:17.180468 | orchestrator | Sunday 05 April 2026 01:07:36 +0000 (0:00:01.087) 0:01:42.686 
**********
2026-04-05 01:09:17.180478 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:09:17.180488 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:09:17.180497 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:09:17.180507 | orchestrator |
2026-04-05 01:09:17.180524 | orchestrator | TASK [cinder : Creating Cinder database] ***************************************
2026-04-05 01:09:17.180540 | orchestrator | Sunday 05 April 2026 01:07:36 +0000 (0:00:00.273) 0:01:42.959 **********
2026-04-05 01:09:17.180556 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:09:17.180572 | orchestrator |
2026-04-05 01:09:17.180587 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] **********
2026-04-05 01:09:17.180604 | orchestrator | Sunday 05 April 2026 01:07:38 +0000 (0:00:02.327) 0:01:45.287 **********
2026-04-05 01:09:17.180630 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:09:17.180644 | orchestrator |
2026-04-05 01:09:17.180654 | orchestrator | TASK [cinder : Running Cinder bootstrap container] *****************************
2026-04-05 01:09:17.180664 | orchestrator | Sunday 05 April 2026 01:07:41 +0000 (0:00:02.573) 0:01:47.861 **********
2026-04-05 01:09:17.180674 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:09:17.180684 | orchestrator |
2026-04-05 01:09:17.180694 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-04-05 01:09:17.180712 | orchestrator | Sunday 05 April 2026 01:08:01 +0000 (0:00:20.091) 0:02:07.953 **********
2026-04-05 01:09:17.180722 | orchestrator |
2026-04-05 01:09:17.180732 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-04-05 01:09:17.180741 | orchestrator | Sunday 05 April 2026 01:08:01 +0000 (0:00:00.160) 0:02:08.113 **********
2026-04-05 01:09:17.180751 | orchestrator |
2026-04-05 01:09:17.180761 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-04-05 01:09:17.180770 | orchestrator | Sunday 05 April 2026 01:08:01 +0000 (0:00:00.089) 0:02:08.203 **********
2026-04-05 01:09:17.180780 | orchestrator |
2026-04-05 01:09:17.180789 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************
2026-04-05 01:09:17.180799 | orchestrator | Sunday 05 April 2026 01:08:02 +0000 (0:00:00.309) 0:02:08.512 **********
2026-04-05 01:09:17.180809 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:09:17.180818 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:09:17.180828 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:09:17.180837 | orchestrator |
2026-04-05 01:09:17.180847 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ******************
2026-04-05 01:09:17.180856 | orchestrator | Sunday 05 April 2026 01:08:27 +0000 (0:00:25.089) 0:02:33.602 **********
2026-04-05 01:09:17.180866 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:09:17.180876 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:09:17.180885 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:09:17.180895 | orchestrator |
2026-04-05 01:09:17.180904 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] *********************
2026-04-05 01:09:17.180914 | orchestrator | Sunday 05 April 2026 01:08:37 +0000 (0:00:10.044) 0:02:43.646 **********
2026-04-05 01:09:17.180924 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:09:17.180933 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:09:17.180943 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:09:17.180952 | orchestrator |
2026-04-05 01:09:17.180962 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2026-04-05 01:09:17.180972 | orchestrator | Sunday 05 April 2026 01:09:06 +0000 (0:00:29.628) 0:03:13.275 **********
2026-04-05 01:09:17.180982 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:09:17.180991 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:09:17.181000 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:09:17.181010 | orchestrator |
2026-04-05 01:09:17.181020 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2026-04-05 01:09:17.181030 | orchestrator | Sunday 05 April 2026 01:09:14 +0000 (0:00:08.107) 0:03:21.383 **********
2026-04-05 01:09:17.181039 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:09:17.181049 | orchestrator |
2026-04-05 01:09:17.181059 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 01:09:17.181069 | orchestrator | testbed-node-0 : ok=33  changed=24  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-05 01:09:17.181079 | orchestrator | testbed-node-1 : ok=24  changed=17  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-04-05 01:09:17.181095 | orchestrator | testbed-node-2 : ok=24  changed=17  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-04-05 01:09:17.181105 | orchestrator |
2026-04-05 01:09:17.181114 | orchestrator |
2026-04-05 01:09:17.181130 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 01:09:17.181140 | orchestrator | Sunday 05 April 2026 01:09:15 +0000 (0:00:00.506) 0:03:21.889 **********
2026-04-05 01:09:17.181149 | orchestrator | ===============================================================================
2026-04-05 01:09:17.181159 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 29.63s
2026-04-05 01:09:17.181169 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 25.09s
2026-04-05 01:09:17.181178 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 20.09s
2026-04-05 01:09:17.181188 | orchestrator | service-ks-register : cinder | Creating/deleting endpoints ------------- 14.26s
2026-04-05 01:09:17.181197 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.93s
2026-04-05 01:09:17.181207 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.05s
2026-04-05 01:09:17.181216 | orchestrator | service-ks-register : cinder | Creating/deleting services --------------- 8.11s
2026-04-05 01:09:17.181226 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 8.11s
2026-04-05 01:09:17.181235 | orchestrator | service-ks-register : cinder | Granting/revoking user roles ------------- 7.84s
2026-04-05 01:09:17.181245 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 6.05s
2026-04-05 01:09:17.181254 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 5.52s
2026-04-05 01:09:17.181264 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.88s
2026-04-05 01:09:17.181274 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.09s
2026-04-05 01:09:17.181283 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.72s
2026-04-05 01:09:17.181293 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 3.71s
2026-04-05 01:09:17.181302 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.47s
2026-04-05 01:09:17.181312 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.27s
2026-04-05 01:09:17.181321 | orchestrator | service-check-containers : cinder | Check containers -------------------- 3.25s
2026-04-05 01:09:17.181330 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.24s
2026-04-05 01:09:17.181345 | orchestrator | service-uwsgi-config : Copying over cinder-api uWSGI config ------------- 2.80s
2026-04-05 01:09:20.217938 | orchestrator | 2026-04-05 01:09:20 | INFO  | Task f84cf7f6-6338-408e-9a11-f6188f0fc692 is in state STARTED
2026-04-05 01:09:20.218236 | orchestrator | 2026-04-05 01:09:20 | INFO  | Task e63a9692-007a-427d-ba16-7bf26e46c2c2 is in state STARTED
2026-04-05 01:09:20.219127 | orchestrator | 2026-04-05 01:09:20 | INFO  | Task 319e4382-809c-4ca3-b7ca-dfc5f9781d77 is in state STARTED
2026-04-05 01:09:20.219854 | orchestrator | 2026-04-05 01:09:20 | INFO  | Task 2b195116-5e97-483a-acf3-e3eff2cfdba4 is in state STARTED
2026-04-05 01:09:20.220150 | orchestrator | 2026-04-05 01:09:20 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:10:54.713260 | orchestrator | 2026-04-05 01:10:54 | INFO  | Task f84cf7f6-6338-408e-9a11-f6188f0fc692 is in state STARTED
2026-04-05 01:10:54.713907 | orchestrator | 2026-04-05 01:10:54 | INFO  | Task e63a9692-007a-427d-ba16-7bf26e46c2c2 is in state STARTED
2026-04-05 01:10:54.714387 | orchestrator | 2026-04-05 01:10:54 | INFO  | Task 319e4382-809c-4ca3-b7ca-dfc5f9781d77 is in state STARTED
2026-04-05 01:10:54.715484 | orchestrator | 2026-04-05 01:10:54 | INFO  | Task 2b195116-5e97-483a-acf3-e3eff2cfdba4 is in state STARTED
2026-04-05 01:10:54.715511 | orchestrator | 2026-04-05 01:10:54 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:10:57.756929 | orchestrator | 2026-04-05 01:10:57 | INFO  | Task f84cf7f6-6338-408e-9a11-f6188f0fc692 is in state STARTED
2026-04-05 01:10:57.758637 | orchestrator | 2026-04-05 01:10:57 | INFO  | Task e63a9692-007a-427d-ba16-7bf26e46c2c2 is in state SUCCESS
2026-04-05 01:10:57.759736 | orchestrator |
2026-04-05 01:10:57.759778 | orchestrator |
2026-04-05 01:10:57.759792 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 01:10:57.759804 | orchestrator |
2026-04-05 01:10:57.759816 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 01:10:57.759828 | orchestrator | Sunday 05 April 2026 01:08:46 +0000 (0:00:00.405) 0:00:00.405 **********
2026-04-05 01:10:57.759840 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:10:57.759852 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:10:57.759863 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:10:57.759874 | orchestrator |
2026-04-05 01:10:57.759886 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 01:10:57.759897 | orchestrator | Sunday 05 April 2026 01:08:46 +0000 (0:00:00.326) 0:00:00.732 **********
2026-04-05 01:10:57.759908 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2026-04-05 01:10:57.759920 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2026-04-05 01:10:57.759931 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2026-04-05 01:10:57.759942 | orchestrator |
2026-04-05 01:10:57.759953 | orchestrator | PLAY [Apply role barbican] *****************************************************
2026-04-05 01:10:57.759964 | orchestrator |
2026-04-05 01:10:57.759975 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-04-05 01:10:57.759985 | orchestrator | Sunday 05 April 2026 01:08:46 +0000 (0:00:00.326) 0:00:01.059 **********
2026-04-05 01:10:57.759996 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:10:57.760009 | orchestrator |
2026-04-05 01:10:57.760020 | orchestrator | TASK [service-ks-register : barbican | Creating/deleting services] *************
2026-04-05 01:10:57.760030 | orchestrator | Sunday 05 April 2026 01:08:48 +0000 (0:00:01.361) 0:00:02.420 **********
2026-04-05 01:10:57.760042 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2026-04-05 01:10:57.760053 | orchestrator |
2026-04-05 01:10:57.760064 | orchestrator | TASK [service-ks-register : barbican | Creating/deleting endpoints] ************
2026-04-05 01:10:57.760075 | orchestrator | Sunday 05 April 2026 01:08:52 +0000 (0:00:04.377) 0:00:06.797 **********
2026-04-05 01:10:57.760086 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2026-04-05 01:10:57.760097 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2026-04-05 01:10:57.760108 | orchestrator |
2026-04-05 01:10:57.760119 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2026-04-05 01:10:57.760145 | orchestrator | Sunday 05 April 2026 01:08:59 +0000 (0:00:07.237) 0:00:14.034 **********
2026-04-05 01:10:57.760192 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-05 01:10:57.760213 | orchestrator |
2026-04-05 01:10:57.760231 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2026-04-05 01:10:57.760247 | orchestrator | Sunday 05 April 2026 01:09:03 +0000 (0:00:03.726) 0:00:17.761 **********
2026-04-05 01:10:57.760264 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2026-04-05 01:10:57.760280 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-05 01:10:57.760296 | orchestrator |
2026-04-05 01:10:57.760592 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2026-04-05 01:10:57.760625 | orchestrator | Sunday 05 April 2026 01:09:08 +0000 (0:00:04.569) 0:00:22.331 **********
2026-04-05 01:10:57.760640 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-05 01:10:57.760653 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2026-04-05 01:10:57.760664 | orchestrator | changed: [testbed-node-0] => (item=creator)
2026-04-05 01:10:57.760675 | orchestrator | changed: [testbed-node-0] => (item=observer)
2026-04-05 01:10:57.760686 | orchestrator | changed: [testbed-node-0] => (item=audit)
2026-04-05 01:10:57.760698 | orchestrator |
2026-04-05 01:10:57.760717 | orchestrator | TASK [service-ks-register : barbican | Granting/revoking user roles] ***********
2026-04-05 01:10:57.760737 | orchestrator | Sunday 05 April 2026 01:09:26 +0000 (0:00:18.410) 0:00:40.743 **********
2026-04-05 01:10:57.760756 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2026-04-05 01:10:57.760776 | orchestrator |
2026-04-05 01:10:57.760795 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2026-04-05 01:10:57.760814 | orchestrator | Sunday 05 April 2026 01:09:31 +0000 (0:00:04.893) 0:00:45.636 **********
2026-04-05 01:10:57.760839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:10:57.760879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:10:57.760895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-05 01:10:57.760933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:10:57.760948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:10:57.760961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-05 01:10:57.760981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-05 01:10:57.760993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:10:57.761012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:10:57.761023 | orchestrator |
2026-04-05 01:10:57.761040 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2026-04-05
01:10:57.761051 | orchestrator | Sunday 05 April 2026 01:09:34 +0000 (0:00:03.059) 0:00:48.696 ********** 2026-04-05 01:10:57.761063 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-04-05 01:10:57.761074 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-04-05 01:10:57.761085 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-04-05 01:10:57.761096 | orchestrator | 2026-04-05 01:10:57.761107 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-04-05 01:10:57.761117 | orchestrator | Sunday 05 April 2026 01:09:37 +0000 (0:00:02.448) 0:00:51.144 ********** 2026-04-05 01:10:57.761128 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:10:57.761139 | orchestrator | 2026-04-05 01:10:57.761154 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-04-05 01:10:57.761177 | orchestrator | Sunday 05 April 2026 01:09:37 +0000 (0:00:00.158) 0:00:51.302 ********** 2026-04-05 01:10:57.761205 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:10:57.761223 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:10:57.761240 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:10:57.761258 | orchestrator | 2026-04-05 01:10:57.761274 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-05 01:10:57.761292 | orchestrator | Sunday 05 April 2026 01:09:37 +0000 (0:00:00.337) 0:00:51.639 ********** 2026-04-05 01:10:57.761310 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:10:57.761328 | orchestrator | 2026-04-05 01:10:57.761347 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-04-05 01:10:57.761365 | orchestrator | Sunday 05 April 2026 01:09:38 +0000 (0:00:00.814) 0:00:52.454 ********** 2026-04-05 
01:10:57.761387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:10:57.761474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:10:57.761512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:10:57.761526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 01:10:57.761537 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 01:10:57.761549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 01:10:57.761568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:10:57.761588 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:10:57.761600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:10:57.761611 | orchestrator | 2026-04-05 01:10:57.761622 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-04-05 01:10:57.761633 | orchestrator | Sunday 05 April 2026 01:09:42 +0000 (0:00:04.544) 0:00:56.999 ********** 2026-04-05 01:10:57.761646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:10:57.761658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 01:10:57.761790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:10:57.761827 | orchestrator | skipping: [testbed-node-0] 2026-04-05 
01:10:57.761840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:10:57.761858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 01:10:57.761870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:10:57.761882 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:10:57.761894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:10:57.761914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 01:10:57.761933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:10:57.761945 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:10:57.761956 | orchestrator | 2026-04-05 01:10:57.761967 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-04-05 01:10:57.761978 | orchestrator | Sunday 05 April 2026 01:09:43 +0000 (0:00:00.603) 0:00:57.603 ********** 2026-04-05 01:10:57.761995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:10:57.762008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 01:10:57.762083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:10:57.762095 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:10:57.762116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 
'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:10:57.762138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 01:10:57.762150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:10:57.762161 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:10:57.762177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:10:57.762190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  
2026-04-05 01:10:57.762201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:10:57.762220 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:10:57.762231 | orchestrator | 2026-04-05 01:10:57.762242 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-04-05 01:10:57.762253 | orchestrator | Sunday 05 April 2026 01:09:44 +0000 (0:00:00.827) 0:00:58.430 ********** 2026-04-05 01:10:57.762968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:10:57.763104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:10:57.763147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:10:57.763169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-05 01:10:57.763248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-05 01:10:57.763274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-05 01:10:57.763296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:10:57.763327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:10:57.763346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:10:57.763365 | orchestrator |
2026-04-05 01:10:57.763385 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2026-04-05 01:10:57.763405 | orchestrator | Sunday 05 April 2026 01:09:48 +0000 (0:00:03.667) 0:01:02.098 **********
2026-04-05 01:10:57.763475 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:10:57.763497 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:10:57.763518 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:10:57.763541 | orchestrator |
2026-04-05 01:10:57.763562 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2026-04-05 01:10:57.763579 | orchestrator | Sunday 05 April 2026 01:09:49 +0000 (0:00:01.535) 0:01:03.633 **********
2026-04-05 01:10:57.763592 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-05 01:10:57.763605 | orchestrator |
2026-04-05 01:10:57.763617 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2026-04-05 01:10:57.763631 | orchestrator | Sunday 05 April 2026 01:09:50 +0000 (0:00:01.001) 0:01:04.635 **********
2026-04-05 01:10:57.763642 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:10:57.763652 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:10:57.763663 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:10:57.763674 | orchestrator |
2026-04-05 01:10:57.763685 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2026-04-05 01:10:57.763699 | orchestrator | Sunday 05 April 2026 01:09:51 +0000 (0:00:00.541) 0:01:05.176 **********
2026-04-05 01:10:57.763737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:10:57.763760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:10:57.763788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:10:57.763813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-05 01:10:57.763825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-05 01:10:57.763847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-05 01:10:57.763859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:10:57.763871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:10:57.763888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:10:57.763909 | orchestrator |
2026-04-05 01:10:57.763921 | orchestrator | TASK [barbican : Copying over existing policy file] ****************************
2026-04-05 01:10:57.763933 | orchestrator | Sunday 05 April 2026 01:09:57 +0000 (0:00:06.587) 0:01:11.764 **********
2026-04-05 01:10:57.763945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:10:57.763964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-05 01:10:57.763976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:10:57.763987 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:10:57.763999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:10:57.764017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-05 01:10:57.764037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:10:57.764048 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:10:57.764060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:10:57.764080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-05 01:10:57.764092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:10:57.764103 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:10:57.764115 | orchestrator |
2026-04-05 01:10:57.764126 | orchestrator | TASK [service-check-containers : barbican | Check containers] ******************
2026-04-05 01:10:57.764138 | orchestrator | Sunday 05 April 2026 01:09:58 +0000 (0:00:00.884) 0:01:12.648 **********
2026-04-05 01:10:57.764154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:10:57.764175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:10:57.764195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:10:57.764208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-05 01:10:57.764219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-05 01:10:57.764242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-05 01:10:57.764264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:10:57.764284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:10:57.764312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:10:57.764333 | orchestrator |
2026-04-05 01:10:57.764351 | orchestrator | TASK [service-check-containers : barbican | Notify handlers to restart containers] ***
2026-04-05 01:10:57.764370 | orchestrator | Sunday 05 April 2026 01:10:01 +0000 (0:00:03.077) 0:01:15.726 **********
2026-04-05 01:10:57.764391 | orchestrator | changed: [testbed-node-0] => {
2026-04-05 01:10:57.764485 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 01:10:57.764505 | orchestrator | }
2026-04-05 01:10:57.764517 | orchestrator | changed: [testbed-node-1] => {
2026-04-05 01:10:57.764528 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 01:10:57.764539 | orchestrator | }
2026-04-05 01:10:57.764550 | orchestrator | changed: [testbed-node-2] => {
2026-04-05 01:10:57.764561 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 01:10:57.764572 | orchestrator | }
2026-04-05 01:10:57.764583 | orchestrator |
2026-04-05 01:10:57.764595 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-05 01:10:57.764606 | orchestrator | Sunday 05 April 2026 01:10:02 +0000 (0:00:00.517) 0:01:16.243 **********
2026-04-05 01:10:57.764618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:10:57.764657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-05 01:10:57.764670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:10:57.764682 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:10:57.764703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:10:57.764716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-05 01:10:57.764736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:10:57.764748 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:10:57.764764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:10:57.764777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-05 01:10:57.764788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:10:57.764800 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:10:57.764811 | orchestrator |
2026-04-05 01:10:57.764823 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-04-05 01:10:57.764834 | orchestrator | Sunday 05 April 2026 01:10:03 +0000 (0:00:01.201) 0:01:17.444 **********
2026-04-05 01:10:57.764845 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:10:57.764856 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:10:57.764867 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:10:57.764878 | orchestrator |
2026-04-05 01:10:57.764889 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2026-04-05 01:10:57.764909 | orchestrator | Sunday 05 April 2026 01:10:03 +0000 (0:00:00.297) 0:01:17.742 **********
2026-04-05 01:10:57.764920 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:10:57.764930 | orchestrator |
2026-04-05 01:10:57.764941 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2026-04-05 01:10:57.764952 | orchestrator | Sunday 05 April 2026 01:10:05 +0000 (0:00:02.283) 0:01:20.026 **********
2026-04-05 01:10:57.764970 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:10:57.764982 | orchestrator |
2026-04-05 01:10:57.764994 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2026-04-05 01:10:57.765004 | orchestrator | Sunday 05 April 2026 01:10:07 +0000 (0:00:02.027) 0:01:22.053 **********
2026-04-05 01:10:57.765015 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:10:57.765025 | orchestrator |
2026-04-05 01:10:57.765034 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-04-05 01:10:57.765044 | orchestrator | Sunday 05 April 2026 01:10:21 +0000 (0:00:13.945) 0:01:35.998 **********
2026-04-05 01:10:57.765054 | orchestrator |
2026-04-05 01:10:57.765063 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-04-05 01:10:57.765073 | orchestrator | Sunday 05 April 2026 01:10:22 +0000 (0:00:00.168) 0:01:36.167 **********
2026-04-05 01:10:57.765083 | orchestrator |
2026-04-05 01:10:57.765093 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-04-05 01:10:57.765104 | orchestrator | Sunday 05 April 2026 01:10:22 +0000 (0:00:00.232) 0:01:36.399 **********
2026-04-05 01:10:57.765121 | orchestrator |
2026-04-05 01:10:57.765148 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2026-04-05 01:10:57.765165 | orchestrator | Sunday 05 April 2026 01:10:22 +0000 (0:00:00.125) 0:01:36.525 **********
2026-04-05 01:10:57.765180 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:10:57.765197 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:10:57.765213 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:10:57.765230 | orchestrator |
2026-04-05 01:10:57.765245 | orchestrator | RUNNING
HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-04-05 01:10:57.765260 | orchestrator | Sunday 05 April 2026 01:10:35 +0000 (0:00:12.816) 0:01:49.342 ********** 2026-04-05 01:10:57.765276 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:10:57.765293 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:10:57.765309 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:10:57.765327 | orchestrator | 2026-04-05 01:10:57.765344 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-04-05 01:10:57.765361 | orchestrator | Sunday 05 April 2026 01:10:42 +0000 (0:00:07.689) 0:01:57.032 ********** 2026-04-05 01:10:57.765376 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:10:57.765385 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:10:57.765395 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:10:57.765405 | orchestrator | 2026-04-05 01:10:57.765489 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:10:57.765542 | orchestrator | testbed-node-0 : ok=25  changed=19  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-04-05 01:10:57.765562 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-05 01:10:57.765579 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-05 01:10:57.765590 | orchestrator | 2026-04-05 01:10:57.765600 | orchestrator | 2026-04-05 01:10:57.765610 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:10:57.765619 | orchestrator | Sunday 05 April 2026 01:10:55 +0000 (0:00:12.916) 0:02:09.949 ********** 2026-04-05 01:10:57.765629 | orchestrator | =============================================================================== 2026-04-05 01:10:57.765639 | orchestrator | service-ks-register : barbican | 
Creating roles ------------------------ 18.41s 2026-04-05 01:10:57.765651 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 13.95s 2026-04-05 01:10:57.765672 | orchestrator | barbican : Restart barbican-worker container --------------------------- 12.92s 2026-04-05 01:10:57.765697 | orchestrator | barbican : Restart barbican-api container ------------------------------ 12.82s 2026-04-05 01:10:57.765713 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 7.69s 2026-04-05 01:10:57.765743 | orchestrator | service-ks-register : barbican | Creating/deleting endpoints ------------ 7.24s 2026-04-05 01:10:57.765758 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 6.59s 2026-04-05 01:10:57.765773 | orchestrator | service-ks-register : barbican | Granting/revoking user roles ----------- 4.89s 2026-04-05 01:10:57.765790 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.57s 2026-04-05 01:10:57.765805 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.54s 2026-04-05 01:10:57.765820 | orchestrator | service-ks-register : barbican | Creating/deleting services ------------- 4.38s 2026-04-05 01:10:57.765837 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.73s 2026-04-05 01:10:57.765854 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.67s 2026-04-05 01:10:57.765871 | orchestrator | service-check-containers : barbican | Check containers ------------------ 3.08s 2026-04-05 01:10:57.765887 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 3.06s 2026-04-05 01:10:57.765904 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 2.45s 2026-04-05 01:10:57.765920 | orchestrator | barbican : Creating barbican database 
----------------------------------- 2.28s 2026-04-05 01:10:57.765937 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.03s 2026-04-05 01:10:57.765950 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.54s 2026-04-05 01:10:57.765973 | orchestrator | barbican : include_tasks ------------------------------------------------ 1.36s 2026-04-05 01:10:57.765984 | orchestrator | 2026-04-05 01:10:57 | INFO  | Task 319e4382-809c-4ca3-b7ca-dfc5f9781d77 is in state STARTED 2026-04-05 01:10:57.765995 | orchestrator | 2026-04-05 01:10:57 | INFO  | Task 30f65df8-fe43-4454-81a2-baa12209b1d5 is in state STARTED 2026-04-05 01:10:57.766005 | orchestrator | 2026-04-05 01:10:57 | INFO  | Task 2b195116-5e97-483a-acf3-e3eff2cfdba4 is in state STARTED 2026-04-05 01:10:57.766073 | orchestrator | 2026-04-05 01:10:57 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:11:00.799241 | orchestrator | 2026-04-05 01:11:00 | INFO  | Task f84cf7f6-6338-408e-9a11-f6188f0fc692 is in state STARTED 2026-04-05 01:11:00.800910 | orchestrator | 2026-04-05 01:11:00 | INFO  | Task 319e4382-809c-4ca3-b7ca-dfc5f9781d77 is in state STARTED 2026-04-05 01:11:00.802614 | orchestrator | 2026-04-05 01:11:00 | INFO  | Task 30f65df8-fe43-4454-81a2-baa12209b1d5 is in state STARTED 2026-04-05 01:11:00.803958 | orchestrator | 2026-04-05 01:11:00 | INFO  | Task 2b195116-5e97-483a-acf3-e3eff2cfdba4 is in state STARTED 2026-04-05 01:11:00.804026 | orchestrator | 2026-04-05 01:11:00 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:11:03.863819 | orchestrator | 2026-04-05 01:11:03 | INFO  | Task f84cf7f6-6338-408e-9a11-f6188f0fc692 is in state STARTED 2026-04-05 01:11:03.865661 | orchestrator | 2026-04-05 01:11:03 | INFO  | Task 319e4382-809c-4ca3-b7ca-dfc5f9781d77 is in state STARTED 2026-04-05 01:11:03.867357 | orchestrator | 2026-04-05 01:11:03 | INFO  | Task 30f65df8-fe43-4454-81a2-baa12209b1d5 is in state STARTED 
30f65df8-fe43-4454-81a2-baa12209b1d5 is in state STARTED 2026-04-05 01:11:58.882761 | orchestrator | 2026-04-05 01:11:58.882864 | orchestrator | 2026-04-05 01:11:58 | INFO  | Task 2b195116-5e97-483a-acf3-e3eff2cfdba4 is in state SUCCESS 2026-04-05 01:11:58.884615 | orchestrator | 2026-04-05 01:11:58.884783 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 01:11:58.884799 | orchestrator | 2026-04-05 01:11:58.884811 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 01:11:58.884822 | orchestrator | Sunday 05 April 2026 01:07:14 +0000 (0:00:00.315) 0:00:00.315 ********** 2026-04-05 01:11:58.884834 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:11:58.884845 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:11:58.884856 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:11:58.884867 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:11:58.884877 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:11:58.884888 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:11:58.884899 | orchestrator | 2026-04-05 01:11:58.884909 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 01:11:58.884920 | orchestrator | Sunday 05 April 2026 01:07:15 +0000 (0:00:00.561) 0:00:00.877 ********** 2026-04-05 01:11:58.884931 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-04-05 01:11:58.884942 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-04-05 01:11:58.884952 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-04-05 01:11:58.884963 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-04-05 01:11:58.884974 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-04-05 01:11:58.884985 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-04-05 01:11:58.884996 | orchestrator | 2026-04-05 
01:11:58.885007 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-04-05 01:11:58.885018 | orchestrator | 2026-04-05 01:11:58.885028 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-05 01:11:58.885039 | orchestrator | Sunday 05 April 2026 01:07:16 +0000 (0:00:00.645) 0:00:01.522 ********** 2026-04-05 01:11:58.885051 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:11:58.885063 | orchestrator | 2026-04-05 01:11:58.885074 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-04-05 01:11:58.885085 | orchestrator | Sunday 05 April 2026 01:07:17 +0000 (0:00:01.542) 0:00:03.064 ********** 2026-04-05 01:11:58.885096 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:11:58.885109 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:11:58.885121 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:11:58.885135 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:11:58.885146 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:11:58.885159 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:11:58.885170 | orchestrator | 2026-04-05 01:11:58.885183 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-04-05 01:11:58.885196 | orchestrator | Sunday 05 April 2026 01:07:19 +0000 (0:00:01.897) 0:00:04.962 ********** 2026-04-05 01:11:58.885209 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:11:58.885220 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:11:58.885230 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:11:58.885257 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:11:58.885268 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:11:58.885279 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:11:58.885289 | orchestrator | 2026-04-05 
01:11:58.885300 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-04-05 01:11:58.885311 | orchestrator | Sunday 05 April 2026 01:07:20 +0000 (0:00:01.154) 0:00:06.117 ********** 2026-04-05 01:11:58.885322 | orchestrator | ok: [testbed-node-0] => { 2026-04-05 01:11:58.885333 | orchestrator |  "changed": false, 2026-04-05 01:11:58.885365 | orchestrator |  "msg": "All assertions passed" 2026-04-05 01:11:58.885376 | orchestrator | } 2026-04-05 01:11:58.885387 | orchestrator | ok: [testbed-node-1] => { 2026-04-05 01:11:58.885397 | orchestrator |  "changed": false, 2026-04-05 01:11:58.885408 | orchestrator |  "msg": "All assertions passed" 2026-04-05 01:11:58.885419 | orchestrator | } 2026-04-05 01:11:58.885429 | orchestrator | ok: [testbed-node-2] => { 2026-04-05 01:11:58.885505 | orchestrator |  "changed": false, 2026-04-05 01:11:58.885523 | orchestrator |  "msg": "All assertions passed" 2026-04-05 01:11:58.885542 | orchestrator | } 2026-04-05 01:11:58.885560 | orchestrator | ok: [testbed-node-3] => { 2026-04-05 01:11:58.885578 | orchestrator |  "changed": false, 2026-04-05 01:11:58.885593 | orchestrator |  "msg": "All assertions passed" 2026-04-05 01:11:58.885604 | orchestrator | } 2026-04-05 01:11:58.885615 | orchestrator | ok: [testbed-node-4] => { 2026-04-05 01:11:58.885626 | orchestrator |  "changed": false, 2026-04-05 01:11:58.885636 | orchestrator |  "msg": "All assertions passed" 2026-04-05 01:11:58.885647 | orchestrator | } 2026-04-05 01:11:58.885657 | orchestrator | ok: [testbed-node-5] => { 2026-04-05 01:11:58.885668 | orchestrator |  "changed": false, 2026-04-05 01:11:58.885679 | orchestrator |  "msg": "All assertions passed" 2026-04-05 01:11:58.885690 | orchestrator | } 2026-04-05 01:11:58.885701 | orchestrator | 2026-04-05 01:11:58.885712 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-04-05 01:11:58.885723 | orchestrator | Sunday 05 April 2026 
01:07:21 +0000 (0:00:00.576) 0:00:06.693 ********** 2026-04-05 01:11:58.885733 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:11:58.885744 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:11:58.885755 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:11:58.885765 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:11:58.885776 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:11:58.885788 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:11:58.885806 | orchestrator | 2026-04-05 01:11:58.885824 | orchestrator | TASK [service-ks-register : neutron | Creating/deleting services] ************** 2026-04-05 01:11:58.885841 | orchestrator | Sunday 05 April 2026 01:07:21 +0000 (0:00:00.635) 0:00:07.328 ********** 2026-04-05 01:11:58.885859 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-04-05 01:11:58.885876 | orchestrator | 2026-04-05 01:11:58.885894 | orchestrator | TASK [service-ks-register : neutron | Creating/deleting endpoints] ************* 2026-04-05 01:11:58.885912 | orchestrator | Sunday 05 April 2026 01:07:25 +0000 (0:00:03.634) 0:00:10.963 ********** 2026-04-05 01:11:58.885932 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-04-05 01:11:58.885952 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-04-05 01:11:58.885970 | orchestrator | 2026-04-05 01:11:58.885999 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-04-05 01:11:58.886086 | orchestrator | Sunday 05 April 2026 01:07:32 +0000 (0:00:07.117) 0:00:18.081 ********** 2026-04-05 01:11:58.886112 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-05 01:11:58.886132 | orchestrator | 2026-04-05 01:11:58.886151 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-04-05 01:11:58.886169 | 
orchestrator | Sunday 05 April 2026 01:07:36 +0000 (0:00:03.684) 0:00:21.765 ********** 2026-04-05 01:11:58.886188 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-04-05 01:11:58.886207 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-05 01:11:58.886225 | orchestrator | 2026-04-05 01:11:58.886243 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-04-05 01:11:58.886263 | orchestrator | Sunday 05 April 2026 01:07:40 +0000 (0:00:04.222) 0:00:25.988 ********** 2026-04-05 01:11:58.886282 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-05 01:11:58.886302 | orchestrator | 2026-04-05 01:11:58.886319 | orchestrator | TASK [service-ks-register : neutron | Granting/revoking user roles] ************ 2026-04-05 01:11:58.886355 | orchestrator | Sunday 05 April 2026 01:07:44 +0000 (0:00:03.495) 0:00:29.483 ********** 2026-04-05 01:11:58.886374 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-04-05 01:11:58.886394 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-04-05 01:11:58.886416 | orchestrator | 2026-04-05 01:11:58.886459 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-05 01:11:58.886480 | orchestrator | Sunday 05 April 2026 01:07:52 +0000 (0:00:08.281) 0:00:37.765 ********** 2026-04-05 01:11:58.886499 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:11:58.886517 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:11:58.886536 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:11:58.886554 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:11:58.886573 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:11:58.886592 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:11:58.886611 | orchestrator | 2026-04-05 01:11:58.886628 | orchestrator | TASK [Load and persist kernel modules] 
***************************************** 2026-04-05 01:11:58.886647 | orchestrator | Sunday 05 April 2026 01:07:52 +0000 (0:00:00.505) 0:00:38.271 ********** 2026-04-05 01:11:58.886665 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:11:58.886683 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:11:58.886701 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:11:58.886720 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:11:58.886738 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:11:58.886756 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:11:58.886775 | orchestrator | 2026-04-05 01:11:58.886793 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-04-05 01:11:58.886812 | orchestrator | Sunday 05 April 2026 01:07:54 +0000 (0:00:01.895) 0:00:40.166 ********** 2026-04-05 01:11:58.886831 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:11:58.886850 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:11:58.886867 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:11:58.886896 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:11:58.886915 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:11:58.886933 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:11:58.886952 | orchestrator | 2026-04-05 01:11:58.886970 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-04-05 01:11:58.886989 | orchestrator | Sunday 05 April 2026 01:07:55 +0000 (0:00:00.889) 0:00:41.055 ********** 2026-04-05 01:11:58.887008 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:11:58.887061 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:11:58.887079 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:11:58.887099 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:11:58.887117 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:11:58.887135 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:11:58.887153 
| orchestrator | 2026-04-05 01:11:58.887171 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-04-05 01:11:58.887190 | orchestrator | Sunday 05 April 2026 01:07:58 +0000 (0:00:02.703) 0:00:43.759 ********** 2026-04-05 01:11:58.887214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:11:58.887286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:11:58.887313 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 01:11:58.887342 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 01:11:58.887363 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 01:11:58.887383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:11:58.887416 | orchestrator | 2026-04-05 01:11:58.887460 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-04-05 01:11:58.887482 | orchestrator | Sunday 05 April 2026 01:08:02 +0000 (0:00:03.867) 
0:00:47.626 ********** 2026-04-05 01:11:58.887501 | orchestrator | [WARNING]: Skipped 2026-04-05 01:11:58.887520 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-04-05 01:11:58.887551 | orchestrator | due to this access issue: 2026-04-05 01:11:58.887570 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-04-05 01:11:58.887588 | orchestrator | a directory 2026-04-05 01:11:58.887607 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 01:11:58.887625 | orchestrator | 2026-04-05 01:11:58.887645 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-05 01:11:58.887663 | orchestrator | Sunday 05 April 2026 01:08:04 +0000 (0:00:02.156) 0:00:49.783 ********** 2026-04-05 01:11:58.887681 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:11:58.887701 | orchestrator | 2026-04-05 01:11:58.887719 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-04-05 01:11:58.887738 | orchestrator | Sunday 05 April 2026 01:08:06 +0000 (0:00:01.627) 0:00:51.410 ********** 2026-04-05 01:11:58.887759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:11:58.887787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:11:58.887808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:11:58.887851 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 01:11:58.887872 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 
01:11:58.887891 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 01:11:58.887911 | orchestrator | 2026-04-05 01:11:58.887930 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-04-05 01:11:58.887956 | orchestrator | Sunday 05 April 2026 01:08:11 +0000 (0:00:05.188) 0:00:56.599 ********** 2026-04-05 01:11:58.887976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:11:58.888006 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:11:58.888025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:11:58.888057 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': 
'30'}}})  2026-04-05 01:11:58.888077 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:11:58.888095 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:11:58.888114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:11:58.888134 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:11:58.888160 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 01:11:58.888193 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:11:58.888212 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 01:11:58.888231 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:11:58.888250 | orchestrator | 2026-04-05 01:11:58.888269 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-04-05 01:11:58.888288 | orchestrator | Sunday 05 April 2026 01:08:13 +0000 (0:00:02.555) 0:00:59.155 ********** 2026-04-05 01:11:58.888318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:11:58.888338 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:11:58.888358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:11:58.888379 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:11:58.888414 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 01:11:58.888467 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:11:58.888489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:11:58.888509 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:11:58.888538 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 01:11:58.888551 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:11:58.888563 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 01:11:58.888574 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:11:58.888585 | orchestrator | 2026-04-05 01:11:58.888596 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-04-05 01:11:58.888606 | orchestrator | Sunday 05 April 2026 01:08:17 +0000 (0:00:03.979) 0:01:03.134 ********** 2026-04-05 01:11:58.888617 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:11:58.888628 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:11:58.888638 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:11:58.888649 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:11:58.888659 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:11:58.888670 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:11:58.888680 | orchestrator | 2026-04-05 01:11:58.888691 | orchestrator | TASK [neutron : Check 
if policies shall be overwritten] ************************ 2026-04-05 01:11:58.888702 | orchestrator | Sunday 05 April 2026 01:08:20 +0000 (0:00:03.031) 0:01:06.166 ********** 2026-04-05 01:11:58.888713 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:11:58.888723 | orchestrator | 2026-04-05 01:11:58.888741 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-04-05 01:11:58.888752 | orchestrator | Sunday 05 April 2026 01:08:21 +0000 (0:00:00.234) 0:01:06.400 ********** 2026-04-05 01:11:58.888763 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:11:58.888773 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:11:58.888784 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:11:58.888795 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:11:58.888805 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:11:58.888816 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:11:58.888826 | orchestrator | 2026-04-05 01:11:58.888837 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-04-05 01:11:58.888854 | orchestrator | Sunday 05 April 2026 01:08:21 +0000 (0:00:00.588) 0:01:06.988 ********** 2026-04-05 01:11:58.888865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:11:58.888877 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:11:58.888888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:11:58.888906 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:11:58.888917 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 01:11:58.888929 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:11:58.888940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:11:58.888958 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:11:58.888974 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 01:11:58.888985 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:11:58.888997 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 01:11:58.889008 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:11:58.889019 | orchestrator | 2026-04-05 01:11:58.889029 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-04-05 01:11:58.889040 | orchestrator | Sunday 05 April 2026 01:08:24 +0000 (0:00:02.878) 0:01:09.867 ********** 2026-04-05 01:11:58.889060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:11:58.889072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:11:58.889096 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 01:11:58.889109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:11:58.889121 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 01:11:58.889139 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 01:11:58.889157 | orchestrator | 2026-04-05 01:11:58.889169 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-04-05 01:11:58.889179 | orchestrator | Sunday 05 April 2026 01:08:27 +0000 (0:00:02.887) 0:01:12.754 ********** 2026-04-05 01:11:58.889191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 
'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:11:58.889208 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 01:11:58.889220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:11:58.889239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:11:58.889257 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 01:11:58.889269 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 01:11:58.889280 | orchestrator | 2026-04-05 01:11:58.889296 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-04-05 01:11:58.889307 | orchestrator | Sunday 05 April 2026 01:08:35 +0000 (0:00:07.923) 0:01:20.678 ********** 2026-04-05 01:11:58.889318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': 
['option httpchk']}}}})  2026-04-05 01:11:58.889330 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:11:58.889347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:11:58.889359 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:11:58.889378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': 
['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:11:58.889390 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:11:58.889401 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 01:11:58.889412 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:11:58.889428 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 01:11:58.889473 | 
orchestrator | skipping: [testbed-node-4] 2026-04-05 01:11:58.889485 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 01:11:58.889496 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:11:58.889507 | orchestrator | 2026-04-05 01:11:58.889518 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-04-05 01:11:58.889529 | orchestrator | Sunday 05 April 2026 01:08:37 +0000 (0:00:02.284) 0:01:22.963 ********** 2026-04-05 01:11:58.889540 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:11:58.889551 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:11:58.889569 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:11:58.889580 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:11:58.889592 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:11:58.889611 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:11:58.889630 | orchestrator | 2026-04-05 01:11:58.889648 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-04-05 01:11:58.889674 | orchestrator | Sunday 05 April 2026 01:08:41 +0000 (0:00:04.157) 0:01:27.121 ********** 2026-04-05 01:11:58.889694 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 01:11:58.889713 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:11:58.889732 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 01:11:58.889753 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:11:58.889787 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 01:11:58.889808 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:11:58.889819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:11:58.889850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:11:58.889863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:11:58.889875 | orchestrator | 2026-04-05 01:11:58.889886 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-04-05 01:11:58.889897 | orchestrator | Sunday 05 April 2026 01:08:46 +0000 (0:00:04.609) 0:01:31.730 ********** 2026-04-05 01:11:58.889908 | orchestrator | skipping: [testbed-node-2] 2026-04-05 
01:11:58.889919 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:11:58.889929 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:11:58.889940 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:11:58.889951 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:11:58.889962 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:11:58.889973 | orchestrator | 2026-04-05 01:11:58.889984 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-04-05 01:11:58.889995 | orchestrator | Sunday 05 April 2026 01:08:48 +0000 (0:00:02.492) 0:01:34.223 ********** 2026-04-05 01:11:58.890006 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:11:58.890052 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:11:58.890063 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:11:58.890074 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:11:58.890090 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:11:58.890101 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:11:58.890112 | orchestrator | 2026-04-05 01:11:58.890123 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-04-05 01:11:58.890134 | orchestrator | Sunday 05 April 2026 01:08:51 +0000 (0:00:02.230) 0:01:36.453 ********** 2026-04-05 01:11:58.890144 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:11:58.890155 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:11:58.890165 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:11:58.890176 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:11:58.890186 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:11:58.890197 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:11:58.890208 | orchestrator | 2026-04-05 01:11:58.890219 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-04-05 01:11:58.890237 | orchestrator | Sunday 05 April 
2026 01:08:53 +0000 (0:00:02.169) 0:01:38.623 ********** 2026-04-05 01:11:58.890247 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:11:58.890258 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:11:58.890269 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:11:58.890280 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:11:58.890290 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:11:58.890301 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:11:58.890312 | orchestrator | 2026-04-05 01:11:58.890323 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-04-05 01:11:58.890334 | orchestrator | Sunday 05 April 2026 01:08:55 +0000 (0:00:02.083) 0:01:40.707 ********** 2026-04-05 01:11:58.890344 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:11:58.890355 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:11:58.890366 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:11:58.890377 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:11:58.890387 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:11:58.890398 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:11:58.890409 | orchestrator | 2026-04-05 01:11:58.890420 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-04-05 01:11:58.890431 | orchestrator | Sunday 05 April 2026 01:08:57 +0000 (0:00:02.028) 0:01:42.735 ********** 2026-04-05 01:11:58.890474 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-05 01:11:58.890487 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:11:58.890498 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-05 01:11:58.890509 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:11:58.890520 | orchestrator | skipping: [testbed-node-2] => 
(item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-05 01:11:58.890530 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:11:58.890541 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-05 01:11:58.890552 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:11:58.890563 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-05 01:11:58.890574 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:11:58.890592 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-05 01:11:58.890604 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:11:58.890615 | orchestrator | 2026-04-05 01:11:58.890625 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-04-05 01:11:58.890639 | orchestrator | Sunday 05 April 2026 01:08:59 +0000 (0:00:01.762) 0:01:44.497 ********** 2026-04-05 01:11:58.890659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:11:58.890678 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:11:58.890705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:11:58.890735 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:11:58.890755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:11:58.890774 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:11:58.890803 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 01:11:58.890822 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:11:58.890842 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  
2026-04-05 01:11:58.890862 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:11:58.890880 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 01:11:58.890910 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:11:58.890931 | orchestrator | 2026-04-05 01:11:58.890950 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-04-05 01:11:58.890968 | orchestrator | Sunday 05 April 2026 01:09:01 +0000 (0:00:02.001) 0:01:46.499 ********** 2026-04-05 01:11:58.890996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:11:58.891017 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:11:58.891036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:11:58.891055 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:11:58.891086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:11:58.891107 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:11:58.891126 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 01:11:58.891159 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:11:58.891186 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 01:11:58.891206 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:11:58.891226 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 01:11:58.891245 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:11:58.891263 | orchestrator | 2026-04-05 01:11:58.891282 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-04-05 01:11:58.891300 | orchestrator | Sunday 05 April 2026 01:09:03 +0000 (0:00:02.454) 0:01:48.953 ********** 2026-04-05 01:11:58.891317 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:11:58.891329 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:11:58.891339 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:11:58.891350 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:11:58.891360 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:11:58.891371 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:11:58.891382 | orchestrator | 2026-04-05 01:11:58.891392 | orchestrator | TASK [neutron : Copying over 
neutron_ovn_metadata_agent.ini] ******************* 2026-04-05 01:11:58.891403 | orchestrator | Sunday 05 April 2026 01:09:05 +0000 (0:00:02.069) 0:01:51.023 ********** 2026-04-05 01:11:58.891414 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:11:58.891424 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:11:58.891488 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:11:58.891501 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:11:58.891512 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:11:58.891523 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:11:58.891534 | orchestrator | 2026-04-05 01:11:58.891632 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-04-05 01:11:58.891644 | orchestrator | Sunday 05 April 2026 01:09:10 +0000 (0:00:05.155) 0:01:56.178 ********** 2026-04-05 01:11:58.891663 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:11:58.891673 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:11:58.891682 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:11:58.891692 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:11:58.891701 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:11:58.891711 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:11:58.891720 | orchestrator | 2026-04-05 01:11:58.891730 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-04-05 01:11:58.891739 | orchestrator | Sunday 05 April 2026 01:09:12 +0000 (0:00:01.862) 0:01:58.041 ********** 2026-04-05 01:11:58.891749 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:11:58.891758 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:11:58.891768 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:11:58.891777 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:11:58.891787 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:11:58.891796 | orchestrator | skipping: 
[testbed-node-4] 2026-04-05 01:11:58.891806 | orchestrator | 2026-04-05 01:11:58.891815 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-04-05 01:11:58.891825 | orchestrator | Sunday 05 April 2026 01:09:14 +0000 (0:00:01.833) 0:01:59.875 ********** 2026-04-05 01:11:58.891834 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:11:58.891844 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:11:58.891853 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:11:58.891863 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:11:58.891872 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:11:58.891881 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:11:58.891891 | orchestrator | 2026-04-05 01:11:58.891900 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-04-05 01:11:58.891910 | orchestrator | Sunday 05 April 2026 01:09:16 +0000 (0:00:02.397) 0:02:02.272 ********** 2026-04-05 01:11:58.891919 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:11:58.891929 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:11:58.891938 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:11:58.891947 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:11:58.891957 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:11:58.891967 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:11:58.891976 | orchestrator | 2026-04-05 01:11:58.891986 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-04-05 01:11:58.891995 | orchestrator | Sunday 05 April 2026 01:09:20 +0000 (0:00:03.709) 0:02:05.982 ********** 2026-04-05 01:11:58.892005 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:11:58.892014 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:11:58.892024 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:11:58.892033 | orchestrator | skipping: 
[testbed-node-0] 2026-04-05 01:11:58.892043 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:11:58.892052 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:11:58.892062 | orchestrator | 2026-04-05 01:11:58.892072 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2026-04-05 01:11:58.892087 | orchestrator | Sunday 05 April 2026 01:09:23 +0000 (0:00:02.617) 0:02:08.599 ********** 2026-04-05 01:11:58.892097 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:11:58.892106 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:11:58.892116 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:11:58.892125 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:11:58.892135 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:11:58.892144 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:11:58.892154 | orchestrator | 2026-04-05 01:11:58.892164 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-04-05 01:11:58.892174 | orchestrator | Sunday 05 April 2026 01:09:25 +0000 (0:00:02.492) 0:02:11.091 ********** 2026-04-05 01:11:58.892184 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:11:58.892193 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:11:58.892203 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:11:58.892221 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:11:58.892231 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:11:58.892240 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:11:58.892250 | orchestrator | 2026-04-05 01:11:58.892259 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-04-05 01:11:58.892269 | orchestrator | Sunday 05 April 2026 01:09:27 +0000 (0:00:01.929) 0:02:13.020 ********** 2026-04-05 01:11:58.892278 | orchestrator | skipping: [testbed-node-1] => 
(item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-05 01:11:58.892288 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:11:58.892298 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-05 01:11:58.892308 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:11:58.892317 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-05 01:11:58.892327 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:11:58.892336 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-05 01:11:58.892346 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:11:58.892355 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-05 01:11:58.892365 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:11:58.892375 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-05 01:11:58.892384 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:11:58.892394 | orchestrator | 2026-04-05 01:11:58.892404 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-04-05 01:11:58.892413 | orchestrator | Sunday 05 April 2026 01:09:30 +0000 (0:00:02.650) 0:02:15.671 ********** 2026-04-05 01:11:58.892431 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 01:11:58.892463 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:11:58.892474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:11:58.892485 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:11:58.892500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:11:58.892517 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:11:58.892528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:11:58.892538 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:11:58.892556 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 01:11:58.892566 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:11:58.892576 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 01:11:58.892586 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:11:58.892596 | orchestrator | 2026-04-05 01:11:58.892606 | orchestrator | TASK [service-check-containers : neutron | Check containers] ******************* 2026-04-05 01:11:58.892615 | orchestrator | Sunday 05 April 2026 01:09:32 +0000 (0:00:02.639) 0:02:18.311 ********** 2026-04-05 01:11:58.892633 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 01:11:58.892650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:11:58.892672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:11:58.892693 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 01:11:58.892708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 
'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:11:58.892741 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 01:11:58.892758 | orchestrator | 2026-04-05 01:11:58.892774 | orchestrator | TASK [service-check-containers : neutron | Notify handlers to restart containers] *** 2026-04-05 01:11:58.892789 | orchestrator | Sunday 05 April 2026 01:09:37 +0000 (0:00:04.261) 0:02:22.572 ********** 2026-04-05 01:11:58.892806 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 01:11:58.892823 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:11:58.892839 | orchestrator | } 2026-04-05 01:11:58.892855 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 01:11:58.892872 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:11:58.892888 | orchestrator | } 2026-04-05 01:11:58.892904 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 01:11:58.892921 | orchestrator |  "msg": "Notifying handlers" 
2026-04-05 01:11:58.892938 | orchestrator | } 2026-04-05 01:11:58.892955 | orchestrator | changed: [testbed-node-3] => { 2026-04-05 01:11:58.892970 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:11:58.892988 | orchestrator | } 2026-04-05 01:11:58.893004 | orchestrator | changed: [testbed-node-4] => { 2026-04-05 01:11:58.893019 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:11:58.893037 | orchestrator | } 2026-04-05 01:11:58.893053 | orchestrator | changed: [testbed-node-5] => { 2026-04-05 01:11:58.893068 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:11:58.893082 | orchestrator | } 2026-04-05 01:11:58.893098 | orchestrator | 2026-04-05 01:11:58.893113 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 01:11:58.893130 | orchestrator | Sunday 05 April 2026 01:09:37 +0000 (0:00:00.670) 0:02:23.243 ********** 2026-04-05 01:11:58.893159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:11:58.893178 | orchestrator | skipping: 
[testbed-node-0]
2026-04-05 01:11:58.893195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:11:58.893224 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:11:58.893250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:11:58.893270 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:11:58.893288 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 01:11:58.893304 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:11:58.893321 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 01:11:58.893338 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:11:58.893365 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 01:11:58.893393 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:11:58.893410 | orchestrator |
2026-04-05 01:11:58.893426 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-05 01:11:58.893511 | orchestrator | Sunday 05 April 2026 01:09:41 +0000 (0:00:03.527) 0:02:26.771 **********
2026-04-05 01:11:58.893531 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:11:58.893547 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:11:58.893563 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:11:58.893580 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:11:58.893596 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:11:58.893613 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:11:58.893628 | orchestrator |
2026-04-05 01:11:58.893644 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2026-04-05 01:11:58.893660 | orchestrator | Sunday 05 April 2026 01:09:42 +0000 (0:00:00.805) 0:02:27.577 **********
2026-04-05 01:11:58.893675 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:11:58.893688 | orchestrator |
2026-04-05 01:11:58.893701 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2026-04-05 01:11:58.893715 | orchestrator | Sunday 05 April 2026 01:09:44 +0000 (0:00:02.379) 0:02:29.956 **********
2026-04-05 01:11:58.893729 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:11:58.893742 | orchestrator |
2026-04-05 01:11:58.893755 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2026-04-05 01:11:58.893769 | orchestrator | Sunday 05 April 2026 01:09:47 +0000 (0:00:02.681) 0:02:32.638 **********
2026-04-05 01:11:58.893782 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:11:58.893796 | orchestrator |
2026-04-05 01:11:58.893810 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-05 01:11:58.893824 | orchestrator | Sunday 05 April 2026 01:10:31 +0000 (0:00:44.318) 0:03:16.956 **********
2026-04-05 01:11:58.893838 | orchestrator |
2026-04-05 01:11:58.893852 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-05 01:11:58.893874 | orchestrator | Sunday 05 April 2026 01:10:31 +0000 (0:00:00.235) 0:03:17.192 **********
2026-04-05 01:11:58.893890 | orchestrator |
2026-04-05 01:11:58.893904 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-05 01:11:58.893918 | orchestrator | Sunday 05 April 2026 01:10:32 +0000 (0:00:00.237) 0:03:17.430 **********
2026-04-05 01:11:58.893932 | orchestrator |
2026-04-05 01:11:58.893945 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-05 01:11:58.893959 | orchestrator | Sunday 05 April 2026 01:10:32 +0000 (0:00:00.177) 0:03:17.608 **********
2026-04-05 01:11:58.893973 | orchestrator |
2026-04-05 01:11:58.893987 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-05 01:11:58.894001 | orchestrator | Sunday 05 April 2026 01:10:32 +0000 (0:00:00.161) 0:03:17.769 **********
2026-04-05 01:11:58.894047 | orchestrator |
2026-04-05 01:11:58.894066 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-05 01:11:58.894080 | orchestrator | Sunday 05 April 2026 01:10:32 +0000 (0:00:00.158) 0:03:17.928 **********
2026-04-05 01:11:58.894094 | orchestrator |
2026-04-05 01:11:58.894107 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-04-05 01:11:58.894120 | orchestrator | Sunday 05 April 2026 01:10:32 +0000 (0:00:00.159) 0:03:18.088 **********
2026-04-05 01:11:58.894133 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:11:58.894146 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:11:58.894160 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:11:58.894173 | orchestrator |
2026-04-05 01:11:58.894187 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-04-05 01:11:58.894213 | orchestrator | Sunday 05 April 2026 01:11:06 +0000 (0:00:34.093) 0:03:52.181 **********
2026-04-05 01:11:58.894229 | orchestrator | changed: [testbed-node-4]
2026-04-05 01:11:58.894243 | orchestrator | changed: [testbed-node-5]
2026-04-05 01:11:58.894256 | orchestrator | changed: [testbed-node-3]
2026-04-05 01:11:58.894269 | orchestrator |
2026-04-05 01:11:58.894284 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 01:11:58.894298 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-05 01:11:58.894313 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-04-05 01:11:58.894326 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-04-05 01:11:58.894341 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-05 01:11:58.894366 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-05 01:11:58.894382 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-05 01:11:58.894396 | orchestrator |
2026-04-05 01:11:58.894409 | orchestrator |
2026-04-05 01:11:58.894423 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 01:11:58.894455 | orchestrator | Sunday 05 April 2026 01:11:56 +0000 (0:00:49.812) 0:04:41.994 **********
2026-04-05 01:11:58.894471 | orchestrator | ===============================================================================
2026-04-05 01:11:58.894485 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 49.81s
2026-04-05 01:11:58.894500 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 44.32s
2026-04-05 01:11:58.894513 | orchestrator | neutron : Restart neutron-server container ----------------------------- 34.09s
2026-04-05 01:11:58.894527 | orchestrator | service-ks-register : neutron | Granting/revoking user roles ------------ 8.28s
2026-04-05 01:11:58.894541 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.92s
2026-04-05 01:11:58.894554 | orchestrator | service-ks-register : neutron | Creating/deleting endpoints ------------- 7.12s
2026-04-05 01:11:58.894568 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 5.19s
2026-04-05 01:11:58.894582 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 5.16s
2026-04-05 01:11:58.894597 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.61s
2026-04-05 01:11:58.894610 | orchestrator | service-check-containers : neutron | Check containers ------------------- 4.26s
2026-04-05 01:11:58.894624 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.22s
2026-04-05 01:11:58.894637 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 4.16s
2026-04-05 01:11:58.894650 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.98s
2026-04-05 01:11:58.894663 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.87s
2026-04-05 01:11:58.894676 | orchestrator | neutron : Copying over ovn_agent.ini ------------------------------------ 3.71s
2026-04-05 01:11:58.894691 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.68s
2026-04-05 01:11:58.894704 | orchestrator | service-ks-register : neutron | Creating/deleting services -------------- 3.63s
2026-04-05 01:11:58.894718 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.53s
2026-04-05 01:11:58.894731 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.50s
2026-04-05 01:11:58.894753 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 3.03s
2026-04-05 01:11:58.894773 | orchestrator | 2026-04-05 01:11:58 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:12:01.944919 | orchestrator | 2026-04-05 01:12:01 | INFO  | Task fa06b22c-159d-4d9f-8a90-4c744620184f is in state STARTED
2026-04-05 01:12:01.947146 | orchestrator | 2026-04-05 01:12:01 | INFO  | Task f84cf7f6-6338-408e-9a11-f6188f0fc692 is in state STARTED
2026-04-05 01:12:01.948975 | orchestrator | 2026-04-05 01:12:01 | INFO  | Task 319e4382-809c-4ca3-b7ca-dfc5f9781d77 is in state STARTED
2026-04-05 01:12:01.951336 | orchestrator | 2026-04-05 01:12:01 | INFO  | Task 30f65df8-fe43-4454-81a2-baa12209b1d5 is in state STARTED
2026-04-05 01:12:01.951878 | orchestrator | 2026-04-05 01:12:01 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:12:05.006746 | orchestrator | 2026-04-05 01:12:05 | INFO  | Task fa06b22c-159d-4d9f-8a90-4c744620184f is in state STARTED
2026-04-05 01:12:05.008995 | orchestrator | 2026-04-05 01:12:05 | INFO  | Task f84cf7f6-6338-408e-9a11-f6188f0fc692 is in state STARTED
2026-04-05 01:12:05.011212 | orchestrator | 2026-04-05 01:12:05 | INFO  | Task 319e4382-809c-4ca3-b7ca-dfc5f9781d77 is in state STARTED
2026-04-05 01:12:05.013816 | orchestrator | 2026-04-05 01:12:05 | INFO  | Task 30f65df8-fe43-4454-81a2-baa12209b1d5 is in state STARTED
2026-04-05 01:12:05.013876 | orchestrator | 2026-04-05 01:12:05 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:14:08.191083 | orchestrator | 2026-04-05 01:14:08 | INFO  | Task fa06b22c-159d-4d9f-8a90-4c744620184f is in state SUCCESS
2026-04-05 01:14:08.194791 | orchestrator |
2026-04-05 01:14:08.194862 | orchestrator |
2026-04-05 01:14:08.194876 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 01:14:08.194890 | orchestrator |
2026-04-05 01:14:08.194901 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 01:14:08.194913 | orchestrator | Sunday 05 April 2026 01:12:00 +0000 (0:00:00.391) 0:00:00.391 **********
2026-04-05 01:14:08.194924 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:14:08.194937 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:14:08.194948 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:14:08.194959 | orchestrator |
2026-04-05 01:14:08.194971 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 01:14:08.194982 | orchestrator | Sunday 05 April 2026 01:12:00 +0000 (0:00:00.312) 0:00:00.704 **********
2026-04-05 01:14:08.194994 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-04-05 01:14:08.195006 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-04-05 01:14:08.195017 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-04-05 01:14:08.195028 | orchestrator |
2026-04-05 01:14:08.195039 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-04-05 01:14:08.195050 | orchestrator |
2026-04-05 01:14:08.195460 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-04-05 01:14:08.195475 | orchestrator | Sunday 05 April 2026 01:12:01 +0000 (0:00:00.322) 0:00:01.027 **********
2026-04-05 01:14:08.195511 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:14:08.195524 | orchestrator |
2026-04-05 01:14:08.195536 | orchestrator | TASK [service-ks-register : placement | Creating/deleting services] ************
2026-04-05 01:14:08.195547 | orchestrator | Sunday 05 April 2026 01:12:01 +0000 (0:00:00.746) 0:00:01.774 **********
2026-04-05 01:14:08.195559 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2026-04-05 01:14:08.195570 | orchestrator |
2026-04-05 01:14:08.195582 | orchestrator | TASK [service-ks-register : placement | Creating/deleting endpoints] ***********
2026-04-05 01:14:08.195593 | orchestrator | Sunday 05 April 2026 01:12:06 +0000 (0:00:04.155) 0:00:05.929 **********
2026-04-05 01:14:08.195604 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2026-04-05 01:14:08.195637 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2026-04-05 01:14:08.195648 | orchestrator |
2026-04-05 01:14:08.195659 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2026-04-05 01:14:08.195670 | orchestrator | Sunday 05 April 2026 01:12:13 +0000 (0:00:07.751) 0:00:13.681 **********
2026-04-05 01:14:08.195681 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-05 01:14:08.195692 | orchestrator |
2026-04-05 01:14:08.195703 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2026-04-05 01:14:08.195714 | orchestrator | Sunday 05 April 2026 01:12:17 +0000 (0:00:03.861) 0:00:17.542 **********
2026-04-05 01:14:08.195725 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2026-04-05 01:14:08.195736 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-05 01:14:08.195768 | orchestrator |
2026-04-05 01:14:08.195779 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2026-04-05 01:14:08.195790 | orchestrator | Sunday 05 April 2026 01:12:22 +0000 (0:00:04.437) 0:00:21.980 **********
2026-04-05 01:14:08.195801 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-05 01:14:08.195812 | orchestrator |
2026-04-05 01:14:08.195823 | orchestrator | TASK [service-ks-register : placement | Granting/revoking user roles] **********
2026-04-05 01:14:08.195834 | orchestrator | Sunday 05 April 2026 01:12:25 +0000 (0:00:03.755) 0:00:25.735 **********
2026-04-05 01:14:08.195845 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2026-04-05 01:14:08.195856 | orchestrator |
2026-04-05 01:14:08.195875 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-04-05 01:14:08.195886 | orchestrator | Sunday 05 April 2026 01:12:30 +0000 (0:00:04.909) 0:00:30.645 **********
2026-04-05 01:14:08.195948 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:14:08.195960 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:14:08.195971 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:14:08.195983 | orchestrator |
2026-04-05 01:14:08.195994 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2026-04-05 01:14:08.196059 | orchestrator | Sunday 05 April 2026 01:12:31 +0000 (0:00:00.385) 0:00:31.030 **********
2026-04-05 01:14:08.196114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-05 01:14:08.196133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-05 01:14:08.196157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-05 01:14:08.196170 | orchestrator |
2026-04-05 01:14:08.196181 | orchestrator | TASK [placement : Check if policies shall be overwritten] **********************
2026-04-05 01:14:08.196192 | orchestrator | Sunday 05 April 2026 01:12:33 +0000 (0:00:02.606) 0:00:33.636 **********
2026-04-05 01:14:08.196203 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:14:08.196214 | orchestrator |
2026-04-05 01:14:08.196283 | orchestrator | TASK [placement : Set placement policy file] ***********************************
2026-04-05 01:14:08.196298 | orchestrator | Sunday 05 April 2026 01:12:33 +0000 (0:00:00.118) 0:00:33.755 **********
2026-04-05 01:14:08.196309 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:14:08.196320 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:14:08.196331 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:14:08.196342 | orchestrator |
2026-04-05 01:14:08.196711 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-04-05 01:14:08.196753 | orchestrator | Sunday 05 April 2026 01:12:34 +0000 (0:00:00.320) 0:00:34.075 **********
2026-04-05 01:14:08.196775 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:14:08.196794 | orchestrator |
2026-04-05 01:14:08.196812 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ******
2026-04-05 01:14:08.196830 | orchestrator | Sunday 05 April 2026 01:12:35 +0000 (0:00:00.756) 0:00:34.832 **********
2026-04-05 01:14:08.196849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-05 01:14:08.196955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-05 01:14:08.197607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-05 01:14:08.197631 | orchestrator |
2026-04-05 01:14:08.197643 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] ***
2026-04-05 01:14:08.197654 | orchestrator | Sunday 05 April 2026 01:12:36 +0000 (0:00:01.665) 0:00:36.497 **********
2026-04-05 01:14:08.197674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-05 01:14:08.197686 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:14:08.197933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-05 01:14:08.197970 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:14:08.197983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-05 01:14:08.197995 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:14:08.198005 | orchestrator |
2026-04-05 01:14:08.198071 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] ***
2026-04-05 01:14:08.198086 | orchestrator | Sunday 05 April 2026 01:12:37 +0000 (0:00:00.567) 0:00:37.064 **********
2026-04-05 01:14:08.198097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-05 01:14:08.198109 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:14:08.198128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-05 01:14:08.198140 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:14:08.198240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-05 01:14:08.198257 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:14:08.198268 | orchestrator |
2026-04-05 01:14:08.198279 | orchestrator | TASK [placement : Copying over config.json files for services] *****************
2026-04-05 01:14:08.198290 | orchestrator | Sunday 05 April 2026 01:12:38 +0000 (0:00:00.765) 0:00:37.830 **********
2026-04-05 01:14:08.198301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-05 01:14:08.198320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-05 01:14:08.198334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-05 01:14:08.198353 | orchestrator |
2026-04-05 01:14:08.198364 | orchestrator | TASK [placement : Copying over placement.conf] *********************************
2026-04-05 01:14:08.198376 | orchestrator | Sunday 05 April 2026 01:12:39 +0000 (0:00:01.829) 0:00:39.660 **********
2026-04-05 01:14:08.198451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled':
True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 01:14:08.198468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 01:14:08.198512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 01:14:08.198525 | orchestrator | 2026-04-05 01:14:08.198537 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-04-05 01:14:08.198581 | orchestrator | Sunday 05 April 2026 01:12:42 +0000 (0:00:02.936) 0:00:42.596 ********** 2026-04-05 01:14:08.198592 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)  2026-04-05 01:14:08.198604 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:14:08.198615 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)  2026-04-05 01:14:08.198626 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:14:08.198637 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)  2026-04-05 01:14:08.198648 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:14:08.198659 | orchestrator | 2026-04-05 01:14:08.198669 | orchestrator | TASK [Configure uWSGI for Placement] ******************************************* 2026-04-05 01:14:08.198680 | orchestrator | Sunday 05 April 2026 01:12:43 +0000 (0:00:00.487) 0:00:43.084 ********** 2026-04-05 01:14:08.198691 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:14:08.198702 | orchestrator | 2026-04-05 01:14:08.198713 | orchestrator | TASK [service-uwsgi-config : Copying 
over placement-api uWSGI config] ********** 2026-04-05 01:14:08.198756 | orchestrator | Sunday 05 April 2026 01:12:45 +0000 (0:00:01.931) 0:00:45.015 ********** 2026-04-05 01:14:08.198768 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:14:08.198778 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:14:08.198789 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:14:08.198800 | orchestrator | 2026-04-05 01:14:08.198810 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-04-05 01:14:08.198821 | orchestrator | Sunday 05 April 2026 01:12:47 +0000 (0:00:01.824) 0:00:46.840 ********** 2026-04-05 01:14:08.198832 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:14:08.198843 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:14:08.198853 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:14:08.198864 | orchestrator | 2026-04-05 01:14:08.198875 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-04-05 01:14:08.198886 | orchestrator | Sunday 05 April 2026 01:12:48 +0000 (0:00:01.855) 0:00:48.696 ********** 2026-04-05 01:14:08.198898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 01:14:08.198910 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:14:08.198928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 01:14:08.198948 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:14:08.198959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 01:14:08.198971 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:14:08.198982 | orchestrator | 2026-04-05 01:14:08.198993 | orchestrator | TASK [service-check-containers : placement | Check containers] ***************** 2026-04-05 01:14:08.199004 | orchestrator | Sunday 05 April 2026 01:12:50 +0000 (0:00:01.117) 0:00:49.814 ********** 2026-04-05 01:14:08.199042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 01:14:08.199055 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 01:14:08.199074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 01:14:08.199095 | orchestrator | 2026-04-05 01:14:08.199109 | orchestrator | TASK [service-check-containers : placement | Notify handlers to restart containers] *** 2026-04-05 01:14:08.199122 | orchestrator | Sunday 05 April 2026 01:12:51 +0000 (0:00:01.509) 0:00:51.323 ********** 2026-04-05 01:14:08.199135 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 01:14:08.199148 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:14:08.199162 | orchestrator | } 2026-04-05 01:14:08.199181 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 01:14:08.199199 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:14:08.199218 | orchestrator | } 2026-04-05 01:14:08.199246 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 01:14:08.199267 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:14:08.199286 | orchestrator | } 2026-04-05 01:14:08.199304 | orchestrator | 2026-04-05 01:14:08.199323 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 01:14:08.199339 | orchestrator | Sunday 05 April 2026 01:12:51 +0000 (0:00:00.390) 0:00:51.714 ********** 2026-04-05 01:14:08.199408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 01:14:08.199433 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:14:08.199454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 01:14:08.199545 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:14:08.199566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 01:14:08.199578 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:14:08.199589 | orchestrator | 2026-04-05 01:14:08.199600 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-04-05 01:14:08.199611 | orchestrator | Sunday 05 April 2026 01:12:53 +0000 (0:00:01.327) 0:00:53.042 ********** 2026-04-05 01:14:08.199622 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:14:08.199633 | orchestrator | 2026-04-05 01:14:08.199644 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-04-05 01:14:08.199654 | orchestrator | Sunday 05 April 2026 01:12:55 +0000 (0:00:02.506) 0:00:55.548 ********** 2026-04-05 01:14:08.199665 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:14:08.199676 | orchestrator | 2026-04-05 01:14:08.199686 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-04-05 01:14:08.199697 | orchestrator | Sunday 05 April 2026 01:12:58 +0000 (0:00:02.577) 0:00:58.125 ********** 2026-04-05 01:14:08.199708 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:14:08.199719 | orchestrator | 2026-04-05 01:14:08.199729 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-05 01:14:08.199740 | orchestrator | Sunday 05 April 
2026 01:13:13 +0000 (0:00:14.991) 0:01:13.117 ********** 2026-04-05 01:14:08.199751 | orchestrator | 2026-04-05 01:14:08.199761 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-05 01:14:08.199772 | orchestrator | Sunday 05 April 2026 01:13:13 +0000 (0:00:00.066) 0:01:13.183 ********** 2026-04-05 01:14:08.199783 | orchestrator | 2026-04-05 01:14:08.199794 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-05 01:14:08.199804 | orchestrator | Sunday 05 April 2026 01:13:13 +0000 (0:00:00.065) 0:01:13.248 ********** 2026-04-05 01:14:08.199815 | orchestrator | 2026-04-05 01:14:08.199826 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-04-05 01:14:08.199837 | orchestrator | Sunday 05 April 2026 01:13:13 +0000 (0:00:00.068) 0:01:13.317 ********** 2026-04-05 01:14:08.199848 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:14:08.199858 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:14:08.199869 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:14:08.199880 | orchestrator | 2026-04-05 01:14:08.199924 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:14:08.199938 | orchestrator | testbed-node-0 : ok=23  changed=16  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-04-05 01:14:08.199951 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-05 01:14:08.199962 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-05 01:14:08.199980 | orchestrator | 2026-04-05 01:14:08.199991 | orchestrator | 2026-04-05 01:14:08.200002 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:14:08.200013 | orchestrator | Sunday 05 April 2026 01:13:21 +0000 
(0:00:08.014) 0:01:21.331 ********** 2026-04-05 01:14:08.200024 | orchestrator | =============================================================================== 2026-04-05 01:14:08.200034 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.99s 2026-04-05 01:14:08.200045 | orchestrator | placement : Restart placement-api container ----------------------------- 8.01s 2026-04-05 01:14:08.200056 | orchestrator | service-ks-register : placement | Creating/deleting endpoints ----------- 7.75s 2026-04-05 01:14:08.200067 | orchestrator | service-ks-register : placement | Granting/revoking user roles ---------- 4.91s 2026-04-05 01:14:08.200076 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.44s 2026-04-05 01:14:08.200086 | orchestrator | service-ks-register : placement | Creating/deleting services ------------ 4.16s 2026-04-05 01:14:08.200095 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.86s 2026-04-05 01:14:08.200105 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.76s 2026-04-05 01:14:08.200114 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.94s 2026-04-05 01:14:08.200124 | orchestrator | placement : Ensuring config directories exist --------------------------- 2.61s 2026-04-05 01:14:08.200133 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.58s 2026-04-05 01:14:08.200143 | orchestrator | placement : Creating placement databases -------------------------------- 2.51s 2026-04-05 01:14:08.200152 | orchestrator | Configure uWSGI for Placement ------------------------------------------- 1.93s 2026-04-05 01:14:08.200162 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.86s 2026-04-05 01:14:08.200172 | orchestrator | placement : Copying over config.json files for services 
----------------- 1.83s 2026-04-05 01:14:08.200181 | orchestrator | service-uwsgi-config : Copying over placement-api uWSGI config ---------- 1.82s 2026-04-05 01:14:08.200191 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.67s 2026-04-05 01:14:08.200201 | orchestrator | service-check-containers : placement | Check containers ----------------- 1.51s 2026-04-05 01:14:08.200210 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.33s 2026-04-05 01:14:08.200220 | orchestrator | placement : Copying over existing policy file --------------------------- 1.12s 2026-04-05 01:14:08.200229 | orchestrator | 2026-04-05 01:14:08 | INFO  | Task f84cf7f6-6338-408e-9a11-f6188f0fc692 is in state STARTED 2026-04-05 01:14:08.200240 | orchestrator | 2026-04-05 01:14:08 | INFO  | Task 319e4382-809c-4ca3-b7ca-dfc5f9781d77 is in state SUCCESS 2026-04-05 01:14:08.200250 | orchestrator | 2026-04-05 01:14:08.200259 | orchestrator | 2026-04-05 01:14:08.200273 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 01:14:08.200283 | orchestrator | 2026-04-05 01:14:08.200293 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 01:14:08.200302 | orchestrator | Sunday 05 April 2026 01:09:22 +0000 (0:00:00.974) 0:00:00.974 ********** 2026-04-05 01:14:08.200312 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:14:08.200322 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:14:08.200332 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:14:08.200341 | orchestrator | 2026-04-05 01:14:08.200351 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 01:14:08.200361 | orchestrator | Sunday 05 April 2026 01:09:22 +0000 (0:00:00.372) 0:00:01.347 ********** 2026-04-05 01:14:08.200370 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 
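The TASKS RECAP above ranks tasks by wall-clock time, which is the quickest way to spot where a deployment run spends its budget (here, the placement bootstrap container at ~15s). A small sketch that parses recap lines of this shape into sorted (task, seconds) pairs; the sample lines are copied from the log:

```python
import re

# Sample TASKS RECAP lines, copied verbatim (minus wrapping) from the log above.
recap = """\
placement : Running placement bootstrap container ---------------------- 14.99s
placement : Restart placement-api container ----------------------------- 8.01s
service-ks-register : placement | Creating/deleting endpoints ----------- 7.75s
"""

# Task name, then a run of dashes, then the duration in seconds.
LINE = re.compile(r"^(?P<task>.+?) -+ (?P<secs>[\d.]+)s$")

def parse_recap(text: str) -> list[tuple[str, float]]:
    """Parse Ansible profile_tasks recap lines, slowest first."""
    rows = []
    for line in text.splitlines():
        m = LINE.match(line.strip())
        if m:
            rows.append((m.group("task"), float(m.group("secs"))))
    return sorted(rows, key=lambda r: r[1], reverse=True)

for task, secs in parse_recap(recap):
    print(f"{secs:6.2f}s  {task}")
```

This format comes from the `profile_tasks` callback plugin, which kolla-ansible enables by default; the same parser works for any play's recap.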
2026-04-05 01:14:08.200380 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-04-05 01:14:08.200390 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-04-05 01:14:08.200399 | orchestrator | 2026-04-05 01:14:08.200409 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-04-05 01:14:08.200428 | orchestrator | 2026-04-05 01:14:08.200438 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-05 01:14:08.200447 | orchestrator | Sunday 05 April 2026 01:09:23 +0000 (0:00:00.320) 0:00:01.667 ********** 2026-04-05 01:14:08.200457 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:14:08.200467 | orchestrator | 2026-04-05 01:14:08.200476 | orchestrator | TASK [service-ks-register : designate | Creating/deleting services] ************ 2026-04-05 01:14:08.200530 | orchestrator | Sunday 05 April 2026 01:09:24 +0000 (0:00:00.987) 0:00:02.655 ********** 2026-04-05 01:14:08.200541 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-04-05 01:14:08.200550 | orchestrator | 2026-04-05 01:14:08.200560 | orchestrator | TASK [service-ks-register : designate | Creating/deleting endpoints] *********** 2026-04-05 01:14:08.200570 | orchestrator | Sunday 05 April 2026 01:09:28 +0000 (0:00:04.546) 0:00:07.201 ********** 2026-04-05 01:14:08.200766 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-04-05 01:14:08.200778 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-04-05 01:14:08.200786 | orchestrator | 2026-04-05 01:14:08.200794 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-04-05 01:14:08.200802 | orchestrator | Sunday 05 April 2026 01:09:36 +0000 
(0:00:07.551) 0:00:14.753 ********** 2026-04-05 01:14:08.200809 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-05 01:14:08.200817 | orchestrator | 2026-04-05 01:14:08.200825 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-04-05 01:14:08.200833 | orchestrator | Sunday 05 April 2026 01:09:40 +0000 (0:00:03.731) 0:00:18.485 ********** 2026-04-05 01:14:08.200841 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-04-05 01:14:08.200849 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-05 01:14:08.200857 | orchestrator | 2026-04-05 01:14:08.200865 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-04-05 01:14:08.200872 | orchestrator | Sunday 05 April 2026 01:09:44 +0000 (0:00:04.439) 0:00:22.925 ********** 2026-04-05 01:14:08.200880 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-05 01:14:08.200888 | orchestrator | 2026-04-05 01:14:08.200896 | orchestrator | TASK [service-ks-register : designate | Granting/revoking user roles] ********** 2026-04-05 01:14:08.200904 | orchestrator | Sunday 05 April 2026 01:09:48 +0000 (0:00:03.884) 0:00:26.809 ********** 2026-04-05 01:14:08.200912 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-04-05 01:14:08.200920 | orchestrator | 2026-04-05 01:14:08.200927 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-04-05 01:14:08.200935 | orchestrator | Sunday 05 April 2026 01:09:52 +0000 (0:00:04.183) 0:00:30.993 ********** 2026-04-05 01:14:08.200945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:14:08.200960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:14:08.200977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 01:14:08.201011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:14:08.201021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.201031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 01:14:08.201044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 01:14:08.201059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.201067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.201097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.201107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}}) 2026-04-05 01:14:08.201115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.201124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.201142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.201150 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.201179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.201189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.201197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.201205 | orchestrator | 2026-04-05 01:14:08.201213 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-04-05 01:14:08.201221 | orchestrator | Sunday 05 April 2026 01:09:56 +0000 (0:00:03.789) 0:00:34.782 ********** 2026-04-05 01:14:08.201229 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:14:08.201237 | orchestrator | 2026-04-05 01:14:08.201245 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-04-05 01:14:08.201253 | orchestrator | Sunday 05 April 2026 01:09:56 +0000 (0:00:00.142) 0:00:34.925 ********** 2026-04-05 01:14:08.201261 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:14:08.201274 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:14:08.201283 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:14:08.201291 | orchestrator | 2026-04-05 01:14:08.201299 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-05 01:14:08.201307 | orchestrator | Sunday 05 April 2026 01:09:56 +0000 (0:00:00.288) 0:00:35.213 ********** 2026-04-05 01:14:08.201315 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:14:08.201323 | orchestrator | 2026-04-05 01:14:08.201330 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-04-05 
01:14:08.201338 | orchestrator | Sunday 05 April 2026 01:09:57 +0000 (0:00:00.523) 0:00:35.737 ********** 2026-04-05 01:14:08.201350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:14:08.201360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:14:08.201390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:14:08.201402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 01:14:08.201418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 01:14:08.201431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 01:14:08.201441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.201472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.201500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.201510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.201524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.201534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.201548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.201557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.201594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.201605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.201614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.201630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.201640 | orchestrator | 2026-04-05 01:14:08.201649 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-04-05 01:14:08.201658 | orchestrator | Sunday 05 April 2026 01:10:04 +0000 (0:00:07.317) 0:00:43.054 ********** 2026-04-05 01:14:08.201673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 
'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:14:08.201684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 01:14:08.201717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:14:08.201736 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.201746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 01:14:08.201760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:14:08.201770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.201801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.201810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 01:14:08.201824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.201833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.201841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.201853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.201862 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:14:08.201870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.201900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.201917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.201926 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:14:08.201934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.201942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.201951 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:14:08.201959 | orchestrator | 2026-04-05 01:14:08.201967 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-04-05 01:14:08.201975 | orchestrator | Sunday 05 April 2026 01:10:05 +0000 (0:00:01.185) 0:00:44.240 ********** 2026-04-05 01:14:08.201987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:14:08.202043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:14:08.202061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 01:14:08.202069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 01:14:08.202078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:14:08.202090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.202098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.202132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 01:14:08.202148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.202156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.202164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.202176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.202184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.202193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.202227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.202237 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:14:08.202245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.202253 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:14:08.202261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.202269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.202278 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:14:08.202286 | orchestrator | 2026-04-05 01:14:08.202293 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-04-05 01:14:08.202302 | 
orchestrator | Sunday 05 April 2026 01:10:07 +0000 (0:00:01.422) 0:00:45.662 ********** 2026-04-05 01:14:08.202314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:14:08.202348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:14:08.202358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:14:08.202366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 01:14:08.202379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 01:14:08.202388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 01:14:08.202401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.202431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.202441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.202449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.202457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.202472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.202498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.202535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.202545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.202553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.202561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.202570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.202577 | orchestrator | 2026-04-05 01:14:08.202589 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-04-05 01:14:08.202608 | orchestrator | Sunday 05 April 2026 01:10:13 +0000 (0:00:05.791) 0:00:51.454 ********** 2026-04-05 01:14:08.202629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': 
['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:14:08.202695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:14:08.202715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': 
['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:14:08.202729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 01:14:08.202750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 01:14:08.202773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 01:14:08.202820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.202838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.202852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.202873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.202888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.202909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.202933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.202981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.202991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.202999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.203007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.203015 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.203029 | orchestrator | 2026-04-05 01:14:08.203042 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-04-05 01:14:08.203050 | orchestrator | Sunday 05 April 2026 01:10:30 +0000 (0:00:17.092) 0:01:08.547 ********** 2026-04-05 01:14:08.203058 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-05 01:14:08.203066 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-05 01:14:08.203074 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-05 01:14:08.203082 | orchestrator | 2026-04-05 01:14:08.203090 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-04-05 01:14:08.203098 | orchestrator | Sunday 05 April 2026 01:10:36 +0000 (0:00:06.344) 0:01:14.891 ********** 2026-04-05 01:14:08.203106 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-05 01:14:08.203113 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-05 01:14:08.203121 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-05 01:14:08.203129 | orchestrator | 2026-04-05 01:14:08.203137 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-04-05 01:14:08.203144 | orchestrator | Sunday 05 April 2026 01:10:41 +0000 (0:00:04.937) 0:01:19.828 ********** 2026-04-05 01:14:08.203176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:14:08.203185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:14:08.203194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 01:14:08.203211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.203220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.203228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.203259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:14:08.203269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 01:14:08.203277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.203293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.203301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.203310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 01:14:08.203342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.203351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.203360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.203373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.203388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': 
'30'}}}) 2026-04-05 01:14:08.203396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:08.203405 | orchestrator | 2026-04-05 01:14:08.203413 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-04-05 01:14:08.203421 | orchestrator | Sunday 05 April 2026 01:10:46 +0000 (0:00:05.518) 0:01:25.346 ********** 2026-04-05 01:14:08.203436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:14:08.203445 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:14:08.203460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:14:08.203472 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 01:14:08.203536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.203555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.203564 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.203573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 01:14:08.203588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.203600 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-05 01:14:08.203609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-05 01:14:08.203622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-05 01:14:08.203631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-05 01:14:08.203640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-05 01:14:08.203653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-05 01:14:08.203662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:14:08.203675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:14:08.203684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:14:08.203692 | orchestrator |
2026-04-05 01:14:08.203700 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-04-05 01:14:08.203709 | orchestrator | Sunday 05 April 2026 01:10:50 +0000 (0:00:03.243) 0:01:28.590 **********
2026-04-05 01:14:08.203717 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:14:08.203725 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:14:08.203734 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:14:08.203742 | orchestrator |
2026-04-05 01:14:08.203750 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-04-05 01:14:08.203763 | orchestrator | Sunday 05 April 2026 01:10:50 +0000 (0:00:00.357) 0:01:28.947 **********
2026-04-05 01:14:08.203772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:14:08.203785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-05 01:14:08.203794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-05 01:14:08.203806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-05 01:14:08.203815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:14:08.203830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-05 01:14:08.203844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-05 01:14:08.203852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-05 01:14:08.203861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:14:08.203873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-05 01:14:08.203882 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:14:08.203890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-05 01:14:08.203904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:14:08.203913 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:14:08.203921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:14:08.203935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-05 01:14:08.203944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-05 01:14:08.203956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-05 01:14:08.203964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-05 01:14:08.203977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:14:08.203991 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:14:08.203999 | orchestrator |
2026-04-05 01:14:08.204007 | orchestrator | TASK [service-check-containers : designate | Check containers] *****************
2026-04-05 01:14:08.204015 | orchestrator | Sunday 05 April 2026 01:10:51 +0000 (0:00:00.779) 0:01:29.726 **********
2026-04-05 01:14:08.204024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:14:08.204033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:14:08.204045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:14:08.204054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-05 01:14:08.204076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-05 01:14:08.204083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-05 01:14:08.204090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-05 01:14:08.204097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-05 01:14:08.204108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-05 01:14:08.204115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-05 01:14:08.204123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-05 01:14:08.204139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-05 01:14:08.204146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-05 01:14:08.204153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-05 01:14:08.204161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-05 01:14:08.204171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:14:08.204178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:14:08.204194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:14:08.204201 | orchestrator |
2026-04-05 01:14:08.204208 | orchestrator | TASK [service-check-containers : designate | Notify handlers to restart containers] ***
2026-04-05 01:14:08.204215 | orchestrator | Sunday 05 April 2026 01:10:56 +0000 (0:00:05.411) 0:01:35.138 **********
2026-04-05 01:14:08.204222 | orchestrator | changed: [testbed-node-0] => {
2026-04-05 01:14:08.204230 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 01:14:08.204237 | orchestrator | }
2026-04-05 01:14:08.204244 | orchestrator | changed: [testbed-node-1] => {
2026-04-05 01:14:08.204251 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 01:14:08.204258 | orchestrator | }
2026-04-05 01:14:08.204265 | orchestrator | changed: [testbed-node-2] => {
2026-04-05 01:14:08.204272 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 01:14:08.204278 | orchestrator | }
2026-04-05 01:14:08.204285 | orchestrator |
2026-04-05 01:14:08.204292 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-05 01:14:08.204299 | orchestrator | Sunday 05 April 2026 01:10:57 +0000 (0:00:00.958) 0:01:36.096 **********
2026-04-05 01:14:08.204306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:14:08.204314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-05 01:14:08.204324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-05 01:14:08.204336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-05 01:14:08.204348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-05 01:14:08.204355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:14:08.204362 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:14:08.204369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:14:08.204377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-05 01:14:08.204387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-05 01:14:08.204400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-05 01:14:08.204413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.204425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.204437 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:14:08.204448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:14:08.204459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 01:14:08.204475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.204511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.204529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.204541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:14:08.204552 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:14:08.204564 | orchestrator | 2026-04-05 01:14:08.204574 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-05 01:14:08.204581 | orchestrator | Sunday 05 April 2026 01:10:59 +0000 (0:00:01.425) 0:01:37.522 ********** 2026-04-05 01:14:08.204587 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:14:08.204594 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:14:08.204601 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:14:08.204607 | orchestrator | 2026-04-05 01:14:08.204614 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-04-05 01:14:08.204621 | orchestrator | Sunday 05 April 2026 01:10:59 +0000 (0:00:00.591) 0:01:38.114 ********** 2026-04-05 01:14:08.204627 | orchestrator | changed: 
[testbed-node-0] => (item=designate) 2026-04-05 01:14:08.204634 | orchestrator | 2026-04-05 01:14:08.204640 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-04-05 01:14:08.204647 | orchestrator | Sunday 05 April 2026 01:11:02 +0000 (0:00:02.778) 0:01:40.893 ********** 2026-04-05 01:14:08.204653 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-05 01:14:08.204660 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-04-05 01:14:08.204667 | orchestrator | 2026-04-05 01:14:08.204673 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-04-05 01:14:08.204680 | orchestrator | Sunday 05 April 2026 01:11:05 +0000 (0:00:02.887) 0:01:43.781 ********** 2026-04-05 01:14:08.204686 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:14:08.204698 | orchestrator | 2026-04-05 01:14:08.204705 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-05 01:14:08.204711 | orchestrator | Sunday 05 April 2026 01:11:20 +0000 (0:00:14.731) 0:01:58.512 ********** 2026-04-05 01:14:08.204718 | orchestrator | 2026-04-05 01:14:08.204724 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-05 01:14:08.204731 | orchestrator | Sunday 05 April 2026 01:11:20 +0000 (0:00:00.094) 0:01:58.607 ********** 2026-04-05 01:14:08.204737 | orchestrator | 2026-04-05 01:14:08.204744 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-05 01:14:08.204751 | orchestrator | Sunday 05 April 2026 01:11:20 +0000 (0:00:00.110) 0:01:58.717 ********** 2026-04-05 01:14:08.204757 | orchestrator | 2026-04-05 01:14:08.204764 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-04-05 01:14:08.204770 | orchestrator | Sunday 05 April 2026 01:11:20 +0000 (0:00:00.110) 
0:01:58.828 ********** 2026-04-05 01:14:08.204777 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:14:08.204783 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:14:08.204790 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:14:08.204797 | orchestrator | 2026-04-05 01:14:08.204808 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-04-05 01:14:08.204816 | orchestrator | Sunday 05 April 2026 01:11:33 +0000 (0:00:13.456) 0:02:12.284 ********** 2026-04-05 01:14:08.204827 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:14:08.204838 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:14:08.204849 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:14:08.204859 | orchestrator | 2026-04-05 01:14:08.204869 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-04-05 01:14:08.204879 | orchestrator | Sunday 05 April 2026 01:11:41 +0000 (0:00:07.295) 0:02:19.580 ********** 2026-04-05 01:14:08.204890 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:14:08.204901 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:14:08.204912 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:14:08.204924 | orchestrator | 2026-04-05 01:14:08.204935 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-04-05 01:14:08.204947 | orchestrator | Sunday 05 April 2026 01:11:46 +0000 (0:00:05.727) 0:02:25.308 ********** 2026-04-05 01:14:08.204954 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:14:08.204961 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:14:08.204967 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:14:08.204974 | orchestrator | 2026-04-05 01:14:08.204980 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-04-05 01:14:08.204987 | orchestrator | Sunday 05 April 2026 01:11:57 +0000 (0:00:10.808) 0:02:36.117 
********** 2026-04-05 01:14:08.204993 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:14:08.205000 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:14:08.205006 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:14:08.205013 | orchestrator | 2026-04-05 01:14:08.205020 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-04-05 01:14:08.205026 | orchestrator | Sunday 05 April 2026 01:12:03 +0000 (0:00:06.293) 0:02:42.411 ********** 2026-04-05 01:14:08.205033 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:14:08.205039 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:14:08.205046 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:14:08.205053 | orchestrator | 2026-04-05 01:14:08.205059 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-04-05 01:14:08.205071 | orchestrator | Sunday 05 April 2026 01:12:10 +0000 (0:00:06.666) 0:02:49.078 ********** 2026-04-05 01:14:08.205078 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:14:08.205085 | orchestrator | 2026-04-05 01:14:08.205091 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:14:08.205098 | orchestrator | testbed-node-0 : ok=30  changed=24  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-04-05 01:14:08.205111 | orchestrator | testbed-node-1 : ok=20  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-05 01:14:08.205118 | orchestrator | testbed-node-2 : ok=20  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-05 01:14:08.205125 | orchestrator | 2026-04-05 01:14:08.205132 | orchestrator | 2026-04-05 01:14:08.205138 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:14:08.205145 | orchestrator | Sunday 05 April 2026 01:12:19 +0000 (0:00:08.598) 0:02:57.677 ********** 2026-04-05 
01:14:08.205152 | orchestrator | =============================================================================== 2026-04-05 01:14:08.205159 | orchestrator | designate : Copying over designate.conf -------------------------------- 17.09s 2026-04-05 01:14:08.205165 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.73s 2026-04-05 01:14:08.205172 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 13.46s 2026-04-05 01:14:08.205179 | orchestrator | designate : Restart designate-producer container ----------------------- 10.81s 2026-04-05 01:14:08.205186 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 8.60s 2026-04-05 01:14:08.205192 | orchestrator | service-ks-register : designate | Creating/deleting endpoints ----------- 7.55s 2026-04-05 01:14:08.205199 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 7.32s 2026-04-05 01:14:08.205206 | orchestrator | designate : Restart designate-api container ----------------------------- 7.30s 2026-04-05 01:14:08.205212 | orchestrator | designate : Restart designate-worker container -------------------------- 6.67s 2026-04-05 01:14:08.205219 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 6.34s 2026-04-05 01:14:08.205226 | orchestrator | designate : Restart designate-mdns container ---------------------------- 6.29s 2026-04-05 01:14:08.205232 | orchestrator | designate : Copying over config.json files for services ----------------- 5.79s 2026-04-05 01:14:08.205239 | orchestrator | designate : Restart designate-central container ------------------------- 5.73s 2026-04-05 01:14:08.205246 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 5.52s 2026-04-05 01:14:08.205253 | orchestrator | service-check-containers : designate | Check containers ----------------- 5.41s 2026-04-05 01:14:08.205259 
| orchestrator | designate : Copying over named.conf ------------------------------------- 4.94s 2026-04-05 01:14:08.205266 | orchestrator | service-ks-register : designate | Creating/deleting services ------------ 4.55s 2026-04-05 01:14:08.205273 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.44s 2026-04-05 01:14:08.205279 | orchestrator | service-ks-register : designate | Granting/revoking user roles ---------- 4.18s 2026-04-05 01:14:08.205286 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.88s 2026-04-05 01:14:08.205293 | orchestrator | 2026-04-05 01:14:08 | INFO  | Task 30f65df8-fe43-4454-81a2-baa12209b1d5 is in state SUCCESS 2026-04-05 01:14:08.205304 | orchestrator | 2026-04-05 01:14:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:14:11.246149 | orchestrator | 2026-04-05 01:14:11 | INFO  | Task f84cf7f6-6338-408e-9a11-f6188f0fc692 is in state STARTED 2026-04-05 01:14:11.247982 | orchestrator | 2026-04-05 01:14:11 | INFO  | Task edc42a7b-34ed-44b2-9a20-9c240bd4126a is in state STARTED 2026-04-05 01:14:11.250553 | orchestrator | 2026-04-05 01:14:11 | INFO  | Task 44c5ca05-fd1b-46c9-8b06-0877eea8dec2 is in state STARTED 2026-04-05 01:14:11.250587 | orchestrator | 2026-04-05 01:14:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:14:14.293527 | orchestrator | 2026-04-05 01:14:14 | INFO  | Task f84cf7f6-6338-408e-9a11-f6188f0fc692 is in state STARTED 2026-04-05 01:14:14.295457 | orchestrator | 2026-04-05 01:14:14 | INFO  | Task edc42a7b-34ed-44b2-9a20-9c240bd4126a is in state STARTED 2026-04-05 01:14:14.297804 | orchestrator | 2026-04-05 01:14:14 | INFO  | Task 44c5ca05-fd1b-46c9-8b06-0877eea8dec2 is in state STARTED 2026-04-05 01:14:14.297852 | orchestrator | 2026-04-05 01:14:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:14:17.345658 | orchestrator | 2026-04-05 01:14:17 | INFO  | Task f84cf7f6-6338-408e-9a11-f6188f0fc692 is 
in state STARTED 2026-04-05 01:14:17.345867 | orchestrator | 2026-04-05 01:14:17 | INFO  | Task edc42a7b-34ed-44b2-9a20-9c240bd4126a is in state STARTED 2026-04-05 01:14:17.346905 | orchestrator | 2026-04-05 01:14:17 | INFO  | Task 44c5ca05-fd1b-46c9-8b06-0877eea8dec2 is in state STARTED 2026-04-05 01:14:17.346947 | orchestrator | 2026-04-05 01:14:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:14:20.383688 | orchestrator | 2026-04-05 01:14:20 | INFO  | Task f84cf7f6-6338-408e-9a11-f6188f0fc692 is in state STARTED 2026-04-05 01:14:20.383785 | orchestrator | 2026-04-05 01:14:20 | INFO  | Task edc42a7b-34ed-44b2-9a20-9c240bd4126a is in state STARTED 2026-04-05 01:14:20.385437 | orchestrator | 2026-04-05 01:14:20 | INFO  | Task 44c5ca05-fd1b-46c9-8b06-0877eea8dec2 is in state STARTED 2026-04-05 01:14:20.385475 | orchestrator | 2026-04-05 01:14:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:14:23.443983 | orchestrator | 2026-04-05 01:14:23 | INFO  | Task f84cf7f6-6338-408e-9a11-f6188f0fc692 is in state STARTED 2026-04-05 01:14:23.445085 | orchestrator | 2026-04-05 01:14:23 | INFO  | Task edc42a7b-34ed-44b2-9a20-9c240bd4126a is in state STARTED 2026-04-05 01:14:23.447746 | orchestrator | 2026-04-05 01:14:23 | INFO  | Task 44c5ca05-fd1b-46c9-8b06-0877eea8dec2 is in state STARTED 2026-04-05 01:14:23.447841 | orchestrator | 2026-04-05 01:14:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:14:26.496039 | orchestrator | 2026-04-05 01:14:26 | INFO  | Task f84cf7f6-6338-408e-9a11-f6188f0fc692 is in state STARTED 2026-04-05 01:14:26.498267 | orchestrator | 2026-04-05 01:14:26 | INFO  | Task edc42a7b-34ed-44b2-9a20-9c240bd4126a is in state STARTED 2026-04-05 01:14:26.501093 | orchestrator | 2026-04-05 01:14:26 | INFO  | Task 44c5ca05-fd1b-46c9-8b06-0877eea8dec2 is in state STARTED 2026-04-05 01:14:26.501151 | orchestrator | 2026-04-05 01:14:26 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:14:29.544426 | 
orchestrator | 2026-04-05 01:14:29 | INFO  | Task f84cf7f6-6338-408e-9a11-f6188f0fc692 is in state STARTED 2026-04-05 01:14:29.545964 | orchestrator | 2026-04-05 01:14:29 | INFO  | Task edc42a7b-34ed-44b2-9a20-9c240bd4126a is in state STARTED 2026-04-05 01:14:29.548166 | orchestrator | 2026-04-05 01:14:29 | INFO  | Task 44c5ca05-fd1b-46c9-8b06-0877eea8dec2 is in state STARTED 2026-04-05 01:14:29.548211 | orchestrator | 2026-04-05 01:14:29 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:14:32.611774 | orchestrator | 2026-04-05 01:14:32 | INFO  | Task f84cf7f6-6338-408e-9a11-f6188f0fc692 is in state STARTED 2026-04-05 01:14:32.613288 | orchestrator | 2026-04-05 01:14:32 | INFO  | Task edc42a7b-34ed-44b2-9a20-9c240bd4126a is in state STARTED 2026-04-05 01:14:32.615276 | orchestrator | 2026-04-05 01:14:32 | INFO  | Task 44c5ca05-fd1b-46c9-8b06-0877eea8dec2 is in state SUCCESS 2026-04-05 01:14:32.615663 | orchestrator | 2026-04-05 01:14:32.615710 | orchestrator | 2026-04-05 01:14:32.615731 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2026-04-05 01:14:32.615751 | orchestrator | 2026-04-05 01:14:32.615770 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2026-04-05 01:14:32.615791 | orchestrator | Sunday 05 April 2026 01:11:00 +0000 (0:00:00.114) 0:00:00.114 ********** 2026-04-05 01:14:32.615811 | orchestrator | changed: [localhost] 2026-04-05 01:14:32.616015 | orchestrator | 2026-04-05 01:14:32.616045 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2026-04-05 01:14:32.616068 | orchestrator | Sunday 05 April 2026 01:11:01 +0000 (0:00:01.025) 0:00:01.140 ********** 2026-04-05 01:14:32.616110 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left). 
2026-04-05 01:14:32.616131 | orchestrator | changed: [localhost] 2026-04-05 01:14:32.616153 | orchestrator | 2026-04-05 01:14:32.616173 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2026-04-05 01:14:32.616194 | orchestrator | Sunday 05 April 2026 01:11:51 +0000 (0:00:50.525) 0:00:51.666 ********** 2026-04-05 01:14:32.616213 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (3 retries left). 2026-04-05 01:14:32.616232 | orchestrator | changed: [localhost] 2026-04-05 01:14:32.616243 | orchestrator | 2026-04-05 01:14:32.616254 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 01:14:32.616265 | orchestrator | 2026-04-05 01:14:32.616275 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 01:14:32.616286 | orchestrator | Sunday 05 April 2026 01:12:18 +0000 (0:00:26.322) 0:01:17.989 ********** 2026-04-05 01:14:32.616297 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:14:32.616307 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:14:32.616318 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:14:32.616329 | orchestrator | 2026-04-05 01:14:32.616339 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 01:14:32.616380 | orchestrator | Sunday 05 April 2026 01:12:18 +0000 (0:00:00.325) 0:01:18.315 ********** 2026-04-05 01:14:32.616397 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2026-04-05 01:14:32.616419 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2026-04-05 01:14:32.616439 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2026-04-05 01:14:32.616462 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2026-04-05 01:14:32.616484 | orchestrator | 2026-04-05 01:14:32.616838 | orchestrator | PLAY [Apply role ironic] 
******************************************************* 2026-04-05 01:14:32.616868 | orchestrator | skipping: no hosts matched 2026-04-05 01:14:32.616888 | orchestrator | 2026-04-05 01:14:32.616903 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:14:32.616922 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 01:14:32.616943 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 01:14:32.616964 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 01:14:32.616983 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 01:14:32.617002 | orchestrator | 2026-04-05 01:14:32.617021 | orchestrator | 2026-04-05 01:14:32.617040 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:14:32.617059 | orchestrator | Sunday 05 April 2026 01:12:19 +0000 (0:00:00.474) 0:01:18.790 ********** 2026-04-05 01:14:32.617078 | orchestrator | =============================================================================== 2026-04-05 01:14:32.617095 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 50.53s 2026-04-05 01:14:32.617106 | orchestrator | Download ironic-agent kernel ------------------------------------------- 26.32s 2026-04-05 01:14:32.617117 | orchestrator | Ensure the destination directory exists --------------------------------- 1.03s 2026-04-05 01:14:32.617128 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.47s 2026-04-05 01:14:32.617139 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2026-04-05 01:14:32.617154 | orchestrator | 2026-04-05 01:14:32.617184 | orchestrator | 2026-04-05 01:14:32 | 
INFO  | Wait 1 second(s) until the next check 2026-04-05 01:14:32.618476 | orchestrator | 2026-04-05 01:14:32.618610 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 01:14:32.618634 | orchestrator | 2026-04-05 01:14:32.618647 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 01:14:32.618658 | orchestrator | Sunday 05 April 2026 01:12:23 +0000 (0:00:00.335) 0:00:00.335 ********** 2026-04-05 01:14:32.618669 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:14:32.618680 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:14:32.618691 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:14:32.618701 | orchestrator | 2026-04-05 01:14:32.618712 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 01:14:32.618723 | orchestrator | Sunday 05 April 2026 01:12:23 +0000 (0:00:00.300) 0:00:00.635 ********** 2026-04-05 01:14:32.618734 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-04-05 01:14:32.618744 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-04-05 01:14:32.618753 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-04-05 01:14:32.618763 | orchestrator | 2026-04-05 01:14:32.618773 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-04-05 01:14:32.618782 | orchestrator | 2026-04-05 01:14:32.618792 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-05 01:14:32.618801 | orchestrator | Sunday 05 April 2026 01:12:23 +0000 (0:00:00.344) 0:00:00.980 ********** 2026-04-05 01:14:32.618811 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:14:32.618821 | orchestrator | 2026-04-05 01:14:32.618831 | orchestrator | TASK [service-ks-register : magnum | 
Creating/deleting services] *************** 2026-04-05 01:14:32.618840 | orchestrator | Sunday 05 April 2026 01:12:24 +0000 (0:00:00.695) 0:00:01.676 ********** 2026-04-05 01:14:32.618850 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-04-05 01:14:32.618860 | orchestrator | 2026-04-05 01:14:32.618869 | orchestrator | TASK [service-ks-register : magnum | Creating/deleting endpoints] ************** 2026-04-05 01:14:32.618879 | orchestrator | Sunday 05 April 2026 01:12:29 +0000 (0:00:04.657) 0:00:06.333 ********** 2026-04-05 01:14:32.618900 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-04-05 01:14:32.618910 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-04-05 01:14:32.618920 | orchestrator | 2026-04-05 01:14:32.618930 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-04-05 01:14:32.618939 | orchestrator | Sunday 05 April 2026 01:12:36 +0000 (0:00:07.381) 0:00:13.715 ********** 2026-04-05 01:14:32.618949 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-05 01:14:32.618958 | orchestrator | 2026-04-05 01:14:32.618968 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-04-05 01:14:32.618978 | orchestrator | Sunday 05 April 2026 01:12:40 +0000 (0:00:03.773) 0:00:17.488 ********** 2026-04-05 01:14:32.618987 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-04-05 01:14:32.618997 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-05 01:14:32.619006 | orchestrator | 2026-04-05 01:14:32.619016 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-04-05 01:14:32.619025 | orchestrator | Sunday 05 April 2026 01:12:44 +0000 (0:00:04.485) 0:00:21.973 ********** 2026-04-05 
01:14:32.619035 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-05 01:14:32.619044 | orchestrator |
2026-04-05 01:14:32.619054 | orchestrator | TASK [service-ks-register : magnum | Granting/revoking user roles] *************
2026-04-05 01:14:32.619063 | orchestrator | Sunday 05 April 2026 01:12:48 +0000 (0:00:03.990) 0:00:25.964 **********
2026-04-05 01:14:32.619073 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2026-04-05 01:14:32.619082 | orchestrator |
2026-04-05 01:14:32.619092 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2026-04-05 01:14:32.619114 | orchestrator | Sunday 05 April 2026 01:12:53 +0000 (0:00:04.397) 0:00:30.362 **********
2026-04-05 01:14:32.619123 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:14:32.619133 | orchestrator |
2026-04-05 01:14:32.619143 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2026-04-05 01:14:32.619152 | orchestrator | Sunday 05 April 2026 01:12:56 +0000 (0:00:03.588) 0:00:33.951 **********
2026-04-05 01:14:32.619162 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:14:32.619171 | orchestrator |
2026-04-05 01:14:32.619181 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2026-04-05 01:14:32.619190 | orchestrator | Sunday 05 April 2026 01:13:01 +0000 (0:00:04.512) 0:00:38.463 **********
2026-04-05 01:14:32.619200 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:14:32.619209 | orchestrator |
2026-04-05 01:14:32.619218 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2026-04-05 01:14:32.619228 | orchestrator | Sunday 05 April 2026 01:13:05 +0000 (0:00:04.221) 0:00:42.685 **********
2026-04-05 01:14:32.619258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:14:32.619275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:14:32.619291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:14:32.619303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:14:32.619323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:14:32.619343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:14:32.619354 | orchestrator |
2026-04-05 01:14:32.619364 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2026-04-05 01:14:32.619374 | orchestrator | Sunday 05 April 2026 01:13:07 +0000 (0:00:01.711) 0:00:44.397 **********
2026-04-05 01:14:32.619384 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:14:32.619393 | orchestrator |
2026-04-05 01:14:32.619403 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2026-04-05 01:14:32.619413 | orchestrator | Sunday 05 April 2026 01:13:07 +0000 (0:00:00.133) 0:00:44.530 **********
2026-04-05 01:14:32.619422 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:14:32.619432 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:14:32.619441 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:14:32.619451 | orchestrator |
2026-04-05 01:14:32.619460 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2026-04-05 01:14:32.619470 | orchestrator | Sunday 05 April 2026 01:13:07 +0000 (0:00:00.291) 0:00:44.821 **********
2026-04-05 01:14:32.619479 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-05 01:14:32.619489 | orchestrator |
2026-04-05 01:14:32.619578 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2026-04-05 01:14:32.619589 | orchestrator | Sunday 05 April 2026 01:13:08 +0000 (0:00:00.917) 0:00:45.739 **********
2026-04-05 01:14:32.619604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:14:32.619623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:14:32.619643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:14:32.619654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:14:32.619670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:14:32.619688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:14:32.619698 | orchestrator |
2026-04-05 01:14:32.619708 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2026-04-05 01:14:32.619718 | orchestrator | Sunday 05 April 2026 01:13:10 +0000 (0:00:02.518) 0:00:48.257 **********
2026-04-05 01:14:32.619728 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:14:32.619737 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:14:32.619747 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:14:32.619757 | orchestrator |
2026-04-05 01:14:32.619766 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-04-05 01:14:32.619776 | orchestrator | Sunday 05 April 2026 01:13:11 +0000 (0:00:00.522) 0:00:48.780 **********
2026-04-05 01:14:32.619786 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:14:32.619795 | orchestrator |
2026-04-05 01:14:32.619805 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
2026-04-05 01:14:32.619814 | orchestrator | Sunday 05 April 2026 01:13:11 +0000 (0:00:00.511) 0:00:49.291 **********
2026-04-05 01:14:32.619831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:14:32.619843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:14:32.619865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:14:32.619876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:14:32.619887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:14:32.619903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:14:32.619914 | orchestrator |
2026-04-05 01:14:32.619924 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] ***
2026-04-05 01:14:32.619933 | orchestrator | Sunday 05 April 2026 01:13:14 +0000 (0:00:02.772) 0:00:52.064 **********
2026-04-05 01:14:32.619944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:14:32.619971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:14:32.619982 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:14:32.619992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:14:32.620010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:14:32.620021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:14:32.620037 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:14:32.620052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:14:32.620063 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:14:32.620072 | orchestrator |
2026-04-05 01:14:32.620082 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ******
2026-04-05 01:14:32.620092 | orchestrator | Sunday 05 April 2026 01:13:16 +0000 (0:00:02.066) 0:00:54.130 **********
2026-04-05 01:14:32.620102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:14:32.620113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:14:32.620123 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:14:32.620141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:14:32.620158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:14:32.620168 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:14:32.620183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:14:32.620194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:14:32.620204 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:14:32.620213 | orchestrator |
2026-04-05 01:14:32.620223 | orchestrator | TASK [magnum : Copying over config.json files for services] ********************
2026-04-05 01:14:32.620233 | orchestrator | Sunday 05 April 2026 01:13:18 +0000 (0:00:01.441) 0:00:55.572 **********
2026-04-05 01:14:32.620249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:14:32.620266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:14:32.620281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:14:32.620292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:14:32.620303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:14:32.620320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:14:32.620336 | orchestrator |
2026-04-05 01:14:32.620369 | orchestrator | TASK [magnum : Copying over magnum.conf] ***************************************
2026-04-05 01:14:32.620380 | orchestrator | Sunday 05 April 2026 01:13:20 +0000 (0:00:02.682) 0:00:58.255 **********
2026-04-05 01:14:32.620396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn':
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:14:32.620407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:14:32.620419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:14:32.620435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:32.620452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:32.620466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:32.620477 | orchestrator | 2026-04-05 01:14:32.620487 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-04-05 01:14:32.620542 | orchestrator | Sunday 05 April 2026 01:13:27 +0000 (0:00:06.820) 0:01:05.075 ********** 2026-04-05 01:14:32.620554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:14:32.620565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 01:14:32.620582 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:14:32.620599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:14:32.620610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 01:14:32.620626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:14:32.620637 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:14:32.620647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 01:14:32.620657 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:14:32.620667 | orchestrator | 2026-04-05 01:14:32.620677 | orchestrator | TASK [service-check-containers : magnum | Check containers] ******************** 2026-04-05 01:14:32.620687 | orchestrator | Sunday 05 April 2026 01:13:29 +0000 (0:00:01.233) 0:01:06.309 ********** 2026-04-05 01:14:32.620702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:14:32.620719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:14:32.620734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:14:32.620745 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:32.620755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:32.620780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 01:14:32.620790 | orchestrator | 2026-04-05 01:14:32.620800 | orchestrator | TASK [service-check-containers : magnum | Notify handlers to restart containers] *** 2026-04-05 01:14:32.620810 | orchestrator | Sunday 05 April 2026 01:13:31 +0000 (0:00:02.474) 0:01:08.783 ********** 2026-04-05 01:14:32.620819 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 01:14:32.620829 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:14:32.620839 | orchestrator | } 2026-04-05 01:14:32.620849 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 01:14:32.620859 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:14:32.620868 | orchestrator | } 2026-04-05 01:14:32.620878 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 01:14:32.620887 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:14:32.620897 | orchestrator | } 2026-04-05 01:14:32.620906 | orchestrator | 2026-04-05 01:14:32.620914 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 01:14:32.620922 | orchestrator | Sunday 05 April 2026 01:13:31 +0000 (0:00:00.397) 0:01:09.181 ********** 2026-04-05 01:14:32.620934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:14:32.620944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 01:14:32.620957 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:14:32.620965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:14:32.620980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 01:14:32.620989 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:14:32.620997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:14:32.621010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 01:14:32.621019 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:14:32.621027 | orchestrator | 2026-04-05 01:14:32.621035 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-05 01:14:32.621043 | orchestrator | Sunday 05 April 2026 01:13:34 +0000 (0:00:02.742) 0:01:11.923 ********** 2026-04-05 01:14:32.621056 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:14:32.621064 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:14:32.621072 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:14:32.621080 | orchestrator | 2026-04-05 01:14:32.621088 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-04-05 01:14:32.621096 | orchestrator | Sunday 05 April 2026 01:13:35 +0000 (0:00:00.613) 0:01:12.537 ********** 2026-04-05 01:14:32.621103 | 
orchestrator | changed: [testbed-node-0] 2026-04-05 01:14:32.621111 | orchestrator | 2026-04-05 01:14:32.621119 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-04-05 01:14:32.621127 | orchestrator | Sunday 05 April 2026 01:13:37 +0000 (0:00:02.713) 0:01:15.251 ********** 2026-04-05 01:14:32.621135 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:14:32.621143 | orchestrator | 2026-04-05 01:14:32.621150 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-04-05 01:14:32.621158 | orchestrator | Sunday 05 April 2026 01:13:40 +0000 (0:00:02.363) 0:01:17.615 ********** 2026-04-05 01:14:32.621166 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:14:32.621174 | orchestrator | 2026-04-05 01:14:32.621182 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-05 01:14:32.621189 | orchestrator | Sunday 05 April 2026 01:13:58 +0000 (0:00:18.404) 0:01:36.019 ********** 2026-04-05 01:14:32.621197 | orchestrator | 2026-04-05 01:14:32.621205 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-05 01:14:32.621213 | orchestrator | Sunday 05 April 2026 01:13:58 +0000 (0:00:00.064) 0:01:36.084 ********** 2026-04-05 01:14:32.621221 | orchestrator | 2026-04-05 01:14:32.621229 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-05 01:14:32.621237 | orchestrator | Sunday 05 April 2026 01:13:58 +0000 (0:00:00.067) 0:01:36.152 ********** 2026-04-05 01:14:32.621244 | orchestrator | 2026-04-05 01:14:32.621252 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-04-05 01:14:32.621260 | orchestrator | Sunday 05 April 2026 01:13:58 +0000 (0:00:00.068) 0:01:36.221 ********** 2026-04-05 01:14:32.621268 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:14:32.621276 | 
orchestrator | changed: [testbed-node-1] 2026-04-05 01:14:32.621284 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:14:32.621291 | orchestrator | 2026-04-05 01:14:32.621299 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-04-05 01:14:32.621307 | orchestrator | Sunday 05 April 2026 01:14:20 +0000 (0:00:21.132) 0:01:57.353 ********** 2026-04-05 01:14:32.621315 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:14:32.621327 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:14:32.621335 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:14:32.621343 | orchestrator | 2026-04-05 01:14:32.621351 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:14:32.621359 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-05 01:14:32.621368 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 01:14:32.621376 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 01:14:32.621383 | orchestrator | 2026-04-05 01:14:32.621391 | orchestrator | 2026-04-05 01:14:32.621399 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:14:32.621407 | orchestrator | Sunday 05 April 2026 01:14:30 +0000 (0:00:10.438) 0:02:07.791 ********** 2026-04-05 01:14:32.621415 | orchestrator | =============================================================================== 2026-04-05 01:14:32.621422 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 21.13s 2026-04-05 01:14:32.621430 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 18.40s 2026-04-05 01:14:32.621443 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 
10.44s 2026-04-05 01:14:32.621451 | orchestrator | service-ks-register : magnum | Creating/deleting endpoints -------------- 7.38s 2026-04-05 01:14:32.621459 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 6.82s 2026-04-05 01:14:32.621466 | orchestrator | service-ks-register : magnum | Creating/deleting services --------------- 4.66s 2026-04-05 01:14:32.621474 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.51s 2026-04-05 01:14:32.621486 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.49s 2026-04-05 01:14:32.621508 | orchestrator | service-ks-register : magnum | Granting/revoking user roles ------------- 4.40s 2026-04-05 01:14:32.621516 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 4.22s 2026-04-05 01:14:32.621524 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.99s 2026-04-05 01:14:32.621532 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.77s 2026-04-05 01:14:32.621540 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.59s 2026-04-05 01:14:32.621547 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.77s 2026-04-05 01:14:32.621555 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.74s 2026-04-05 01:14:32.621563 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.71s 2026-04-05 01:14:32.621571 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.68s 2026-04-05 01:14:32.621579 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.52s 2026-04-05 01:14:32.621587 | orchestrator | service-check-containers : magnum | Check containers -------------------- 2.47s 
2026-04-05 01:14:32.621595 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.36s
2026-04-05 01:14:35.664155 | orchestrator | 2026-04-05 01:14:35 | INFO  | Task f84cf7f6-6338-408e-9a11-f6188f0fc692 is in state STARTED
2026-04-05 01:14:35.665812 | orchestrator | 2026-04-05 01:14:35 | INFO  | Task edc42a7b-34ed-44b2-9a20-9c240bd4126a is in state STARTED
2026-04-05 01:14:35.665855 | orchestrator | 2026-04-05 01:14:35 | INFO  | Wait 1 second(s) until the next check
[identical three-line status checks repeated every ~3 seconds from 01:14:38 through 01:17:17; both tasks remained in state STARTED throughout]
2026-04-05 01:17:20.233362 | orchestrator | 2026-04-05 01:17:20 | INFO  | Task f84cf7f6-6338-408e-9a11-f6188f0fc692 is in state SUCCESS
2026-04-05 01:17:20.235836 | orchestrator |
2026-04-05 01:17:20.235909 | orchestrator |
2026-04-05 01:17:20.235924 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 01:17:20.235938 | orchestrator |
2026-04-05 01:17:20.235950 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-04-05 01:17:20.235962 | orchestrator | Sunday 05 April 2026 01:06:08 +0000 (0:00:00.362) 0:00:00.362 **********
2026-04-05 01:17:20.235974 | orchestrator | changed: [testbed-manager]
2026-04-05 01:17:20.235987 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:17:20.235998 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:17:20.236010 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:17:20.236022 | orchestrator | changed: [testbed-node-3]
2026-04-05 01:17:20.236034 | orchestrator | changed: [testbed-node-4]
2026-04-05 01:17:20.236045 | orchestrator | changed: [testbed-node-5]
2026-04-05 01:17:20.236055 | orchestrator |
2026-04-05 01:17:20.236066 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 01:17:20.236077 | orchestrator | Sunday 05 April 2026 01:06:09 +0000 (0:00:00.843) 0:00:01.205 **********
2026-04-05 01:17:20.236088 | orchestrator | changed: [testbed-manager]
2026-04-05 01:17:20.236101 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:17:20.236112 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:17:20.236125 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:17:20.236135 | orchestrator | changed: [testbed-node-3]
2026-04-05 01:17:20.236147 | orchestrator | changed: [testbed-node-4]
2026-04-05 01:17:20.236159 | orchestrator | changed: [testbed-node-5]
2026-04-05 01:17:20.236172 | orchestrator |
2026-04-05 01:17:20.236182 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 01:17:20.236194 | orchestrator | Sunday 05 April 2026 01:06:10 +0000 (0:00:00.816) 0:00:02.022 **********
2026-04-05 01:17:20.236206 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-04-05 01:17:20.236218 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-04-05 01:17:20.236251 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-04-05 01:17:20.236263 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-04-05 01:17:20.236275 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-04-05 01:17:20.236286 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-04-05 01:17:20.236298 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-04-05 01:17:20.236310 | orchestrator |
2026-04-05 01:17:20.236321 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-04-05 01:17:20.236936 | orchestrator |
2026-04-05 01:17:20.236950 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-04-05 01:17:20.236962 | orchestrator | Sunday 05 April 2026 01:06:10 +0000 (0:00:00.799) 0:00:02.822 **********
2026-04-05 01:17:20.236974 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:17:20.236985 | orchestrator |
2026-04-05 01:17:20.236996 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-04-05 01:17:20.237007 | orchestrator | Sunday 05 April 2026 01:06:11 +0000 (0:00:00.757) 0:00:03.579 **********
2026-04-05 01:17:20.237019 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-04-05 01:17:20.237031 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-04-05 01:17:20.237042 | orchestrator |
2026-04-05 01:17:20.237054 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-04-05 01:17:20.237065 | orchestrator | Sunday 05 April 2026 01:06:16 +0000 (0:00:05.055) 0:00:08.635 **********
2026-04-05 01:17:20.237100 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-05 01:17:20.237108 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-05 01:17:20.237114 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:17:20.237120 | orchestrator |
2026-04-05 01:17:20.237127 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-04-05 01:17:20.237133 | orchestrator | Sunday 05 April 2026 01:06:21 +0000 (0:00:04.509) 0:00:13.144 **********
2026-04-05 01:17:20.237139 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:17:20.237146 | orchestrator |
2026-04-05 01:17:20.237152 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-04-05 01:17:20.237158 | orchestrator | Sunday 05 April 2026 01:06:22 +0000 (0:00:00.828) 0:00:13.972 **********
2026-04-05 01:17:20.237165 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:17:20.237171 | orchestrator |
2026-04-05 01:17:20.237177 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-04-05 01:17:20.237183 | orchestrator | Sunday 05 April 2026 01:06:23 +0000 (0:00:01.448) 0:00:15.420 **********
2026-04-05 01:17:20.237189 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:17:20.237195 | orchestrator |
2026-04-05 01:17:20.237201 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-05 01:17:20.237207 | orchestrator | Sunday 05 April 2026 01:06:26 +0000 (0:00:02.911) 0:00:18.331 **********
2026-04-05 01:17:20.237213 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:17:20.237219 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:17:20.237226 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:17:20.237232 | orchestrator |
2026-04-05 01:17:20.237238 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-04-05 01:17:20.237244 | orchestrator | Sunday 05 April 2026 01:06:27 +0000 (0:00:00.517) 0:00:18.849 **********
2026-04-05 01:17:20.237250 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:17:20.237257 | orchestrator |
2026-04-05 01:17:20.237263 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-04-05 01:17:20.237269 | orchestrator | Sunday 05 April 2026 01:07:18 +0000 (0:00:51.015) 0:01:09.865 **********
2026-04-05 01:17:20.237275 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:17:20.237281 | orchestrator |
2026-04-05 01:17:20.237287 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-04-05 01:17:20.237481 | orchestrator | Sunday 05 April 2026 01:07:33 +0000 (0:00:15.682) 0:01:25.547 **********
2026-04-05 01:17:20.237493 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:17:20.237499 | orchestrator |
2026-04-05 01:17:20.237505 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-04-05 01:17:20.237511 | orchestrator | Sunday 05 April 2026 01:07:49 +0000 (0:00:15.392) 0:01:40.940 **********
2026-04-05 01:17:20.237545 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:17:20.237553 | orchestrator |
2026-04-05 01:17:20.237559 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-04-05 01:17:20.237565 | orchestrator | Sunday 05 April 2026 01:07:49 +0000 (0:00:00.757) 0:01:41.698 **********
2026-04-05 01:17:20.237571 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:17:20.237579 | orchestrator |
2026-04-05 01:17:20.237589 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-05 01:17:20.237600 | orchestrator | Sunday 05 April 2026 01:07:50 +0000 (0:00:00.664) 0:01:42.362 **********
2026-04-05 01:17:20.237607 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:17:20.237614 | orchestrator |
2026-04-05 01:17:20.237620 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-04-05 01:17:20.237626 | orchestrator | Sunday 05 April 2026 01:07:51 +0000 (0:00:00.601) 0:01:42.964 **********
2026-04-05 01:17:20.237632 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:17:20.237638 | orchestrator |
2026-04-05 01:17:20.237645 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-04-05 01:17:20.237651 | orchestrator | Sunday 05 April 2026 01:08:10 +0000 (0:00:19.609) 0:02:02.573 **********
2026-04-05 01:17:20.237711 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:17:20.237719 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:17:20.237725 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:17:20.237731 | orchestrator |
2026-04-05 01:17:20.237737 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-04-05 01:17:20.237743 | orchestrator |
2026-04-05 01:17:20.237749 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-04-05 01:17:20.237755 | orchestrator | Sunday 05 April 2026 01:08:10 +0000 (0:00:00.255) 0:02:02.829 **********
2026-04-05 01:17:20.237769 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:17:20.237776 | orchestrator |
2026-04-05 01:17:20.237782 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-04-05 01:17:20.237788 | orchestrator | Sunday 05 April 2026 01:08:12 +0000 (0:00:01.046) 0:02:03.876 **********
2026-04-05 01:17:20.237794 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:17:20.237800 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:17:20.237806 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:17:20.237812 | orchestrator |
2026-04-05 01:17:20.237818 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-04-05 01:17:20.237825 | orchestrator | Sunday 05 April 2026 01:08:14 +0000 (0:00:02.161) 0:02:06.037 **********
2026-04-05 01:17:20.237831 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:17:20.237837 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:17:20.237843 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:17:20.237849 | orchestrator |
2026-04-05 01:17:20.237855 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-04-05 01:17:20.237861 | orchestrator | Sunday 05 April 2026 01:08:16 +0000 (0:00:02.326) 0:02:08.364 **********
2026-04-05 01:17:20.237867 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:17:20.237873 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:17:20.237879 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:17:20.237885 | orchestrator |
2026-04-05 01:17:20.237891 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-04-05 01:17:20.237898 | orchestrator | Sunday 05 April 2026 01:08:16 +0000 (0:00:00.448) 0:02:08.812 **********
2026-04-05 01:17:20.237904 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-05 01:17:20.237910 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:17:20.237916 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-05 01:17:20.237922 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:17:20.237928 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-05 01:17:20.237934 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-04-05 01:17:20.237940 | orchestrator |
2026-04-05 01:17:20.237947 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-04-05 01:17:20.237953 | orchestrator | Sunday 05 April 2026 01:08:29 +0000 (0:00:12.911) 0:02:21.724 **********
2026-04-05 01:17:20.237959 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:17:20.237965 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:17:20.237971 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:17:20.237977 | orchestrator |
2026-04-05 01:17:20.237983 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-04-05 01:17:20.237989 | orchestrator | Sunday 05 April 2026 01:08:30 +0000 (0:00:00.423) 0:02:22.147 **********
2026-04-05 01:17:20.237995 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-04-05 01:17:20.238001 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:17:20.238007 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-05 01:17:20.238013 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:17:20.238056 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-05 01:17:20.238063 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:17:20.238418 | orchestrator |
2026-04-05 01:17:20.238428 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-04-05 01:17:20.238442 | orchestrator | Sunday 05 April 2026 01:08:32 +0000 (0:00:02.665) 0:02:24.813 **********
2026-04-05 01:17:20.238449 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:17:20.238455 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:17:20.238461 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:17:20.238488 | orchestrator |
2026-04-05 01:17:20.238495 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-04-05 01:17:20.238501 | orchestrator | Sunday 05 April 2026 01:08:33 +0000 (0:00:00.560) 0:02:25.373 **********
2026-04-05 01:17:20.238508 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:17:20.238514 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:17:20.238520 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:17:20.238526 | orchestrator |
2026-04-05 01:17:20.238532 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-04-05 01:17:20.238538 | orchestrator | Sunday 05 April 2026 01:08:34 +0000 (0:00:01.080) 0:02:26.454 **********
2026-04-05 01:17:20.238545 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:17:20.238551 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:17:20.238580 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:17:20.238587 | orchestrator |
2026-04-05 01:17:20.238593 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-04-05 01:17:20.238599 | orchestrator | Sunday 05 April 2026 01:08:37 +0000 (0:00:02.552) 0:02:29.007 **********
2026-04-05 01:17:20.238606 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:17:20.238612 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:17:20.238689 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:17:20.238698 | orchestrator |
2026-04-05 01:17:20.238704 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-04-05 01:17:20.238711 | orchestrator | Sunday 05 April 2026 01:09:02 +0000 (0:00:25.538) 0:02:54.546 **********
2026-04-05 01:17:20.238718 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:17:20.238724 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:17:20.238730 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:17:20.238737 | orchestrator |
2026-04-05 01:17:20.238743 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-04-05 01:17:20.238750 | orchestrator | Sunday 05 April 2026 01:09:17 +0000 (0:00:14.697) 0:03:09.244 **********
2026-04-05 01:17:20.238756 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:17:20.238763 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:17:20.238769 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:17:20.238775 | orchestrator |
2026-04-05 01:17:20.238782 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-04-05 01:17:20.238788 | orchestrator | Sunday 05 April 2026 01:09:19 +0000 (0:00:02.218) 0:03:11.462 **********
2026-04-05 01:17:20.238795 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:17:20.238801 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:17:20.238808 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:17:20.238814 | orchestrator |
2026-04-05 01:17:20.238821 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-04-05 01:17:20.238833 | orchestrator | Sunday 05 April 2026 01:09:33 +0000 (0:00:13.883) 0:03:25.345 **********
2026-04-05 01:17:20.238840 | orchestrator | skipping: [testbed-node-0]
2026-04-05
01:17:20.238846 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:17:20.238853 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:17:20.238859 | orchestrator | 2026-04-05 01:17:20.238866 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-04-05 01:17:20.238872 | orchestrator | Sunday 05 April 2026 01:09:36 +0000 (0:00:03.301) 0:03:28.646 ********** 2026-04-05 01:17:20.238879 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:17:20.238885 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:17:20.238892 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:17:20.238898 | orchestrator | 2026-04-05 01:17:20.238905 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-04-05 01:17:20.238918 | orchestrator | 2026-04-05 01:17:20.238924 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-05 01:17:20.238931 | orchestrator | Sunday 05 April 2026 01:09:37 +0000 (0:00:00.328) 0:03:28.975 ********** 2026-04-05 01:17:20.238937 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:17:20.238945 | orchestrator | 2026-04-05 01:17:20.238952 | orchestrator | TASK [service-ks-register : nova | Creating/deleting services] ***************** 2026-04-05 01:17:20.238958 | orchestrator | Sunday 05 April 2026 01:09:38 +0000 (0:00:01.025) 0:03:30.000 ********** 2026-04-05 01:17:20.238964 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-04-05 01:17:20.238971 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-04-05 01:17:20.238978 | orchestrator | 2026-04-05 01:17:20.238984 | orchestrator | TASK [service-ks-register : nova | Creating/deleting endpoints] **************** 2026-04-05 01:17:20.238991 | orchestrator | Sunday 05 April 2026 01:09:41 +0000 (0:00:03.632) 0:03:33.633 
********** 2026-04-05 01:17:20.238997 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-04-05 01:17:20.239005 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-04-05 01:17:20.239012 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-04-05 01:17:20.239018 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-04-05 01:17:20.239025 | orchestrator | 2026-04-05 01:17:20.239032 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-04-05 01:17:20.239038 | orchestrator | Sunday 05 April 2026 01:09:49 +0000 (0:00:07.758) 0:03:41.391 ********** 2026-04-05 01:17:20.239045 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-05 01:17:20.239052 | orchestrator | 2026-04-05 01:17:20.239058 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2026-04-05 01:17:20.239065 | orchestrator | Sunday 05 April 2026 01:09:52 +0000 (0:00:03.267) 0:03:44.659 ********** 2026-04-05 01:17:20.239071 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-04-05 01:17:20.239078 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-05 01:17:20.239084 | orchestrator | 2026-04-05 01:17:20.239091 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-04-05 01:17:20.239097 | orchestrator | Sunday 05 April 2026 01:09:56 +0000 (0:00:03.720) 0:03:48.379 ********** 2026-04-05 01:17:20.239104 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-05 01:17:20.239501 | orchestrator | 2026-04-05 01:17:20.239520 | orchestrator | TASK [service-ks-register : nova | Granting/revoking user roles] 
*************** 2026-04-05 01:17:20.239529 | orchestrator | Sunday 05 April 2026 01:10:00 +0000 (0:00:03.489) 0:03:51.868 ********** 2026-04-05 01:17:20.239538 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-04-05 01:17:20.239547 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-04-05 01:17:20.239556 | orchestrator | 2026-04-05 01:17:20.239564 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-04-05 01:17:20.239655 | orchestrator | Sunday 05 April 2026 01:10:07 +0000 (0:00:07.510) 0:03:59.379 ********** 2026-04-05 01:17:20.239705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:17:20.239743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:17:20.239757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:17:20.239844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 
'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:17:20.239860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:17:20.239885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:17:20.239897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:17:20.239908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 
'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:17:20.239920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:17:20.239931 | orchestrator | 2026-04-05 01:17:20.240009 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-04-05 01:17:20.240024 | orchestrator | Sunday 05 April 2026 01:10:09 +0000 (0:00:02.248) 0:04:01.628 ********** 2026-04-05 01:17:20.240035 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:17:20.240045 | orchestrator | 2026-04-05 01:17:20.240055 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-04-05 01:17:20.240065 | orchestrator | Sunday 05 April 2026 01:10:09 +0000 (0:00:00.126) 0:04:01.755 ********** 2026-04-05 01:17:20.240084 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:17:20.240095 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:17:20.240105 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:17:20.240115 | orchestrator | 2026-04-05 01:17:20.240125 | orchestrator | TASK [nova : Check for vendordata file] 
**************************************** 2026-04-05 01:17:20.240134 | orchestrator | Sunday 05 April 2026 01:10:10 +0000 (0:00:00.310) 0:04:02.066 ********** 2026-04-05 01:17:20.240145 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 01:17:20.240155 | orchestrator | 2026-04-05 01:17:20.240165 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-04-05 01:17:20.240175 | orchestrator | Sunday 05 April 2026 01:10:11 +0000 (0:00:00.809) 0:04:02.876 ********** 2026-04-05 01:17:20.240185 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:17:20.240196 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:17:20.240206 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:17:20.240218 | orchestrator | 2026-04-05 01:17:20.240228 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-05 01:17:20.240239 | orchestrator | Sunday 05 April 2026 01:10:11 +0000 (0:00:00.318) 0:04:03.194 ********** 2026-04-05 01:17:20.240249 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:17:20.240260 | orchestrator | 2026-04-05 01:17:20.240277 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-05 01:17:20.240313 | orchestrator | Sunday 05 April 2026 01:10:12 +0000 (0:00:00.904) 0:04:04.099 ********** 2026-04-05 01:17:20.240327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:17:20.240339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:17:20.240447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:17:20.240492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:17:20.240505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 
'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:17:20.240516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:17:20.240599 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:17:20.240622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:17:20.240633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:17:20.240643 | orchestrator | 2026-04-05 01:17:20.240659 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS 
certificate] *** 2026-04-05 01:17:20.240691 | orchestrator | Sunday 05 April 2026 01:10:15 +0000 (0:00:03.114) 0:04:07.213 ********** 2026-04-05 01:17:20.240703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:17:20.240715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:17:20.240762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 01:17:20.240775 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:17:20.240860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:17:20.240883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:17:20.240895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 01:17:20.240906 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:17:20.240915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 
'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:17:20.241003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:17:20.241026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 01:17:20.241037 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:17:20.241047 | orchestrator | 2026-04-05 01:17:20.241057 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-05 01:17:20.241068 | orchestrator | Sunday 05 April 2026 01:10:16 +0000 (0:00:00.793) 0:04:08.006 ********** 2026-04-05 01:17:20.241079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:17:20.241090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:17:20.241198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 01:17:20.241216 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:17:20.241228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:17:20.241246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:17:20.241258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 01:17:20.241277 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:17:20.241287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:17:20.241347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': 
{'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:17:20.241361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 01:17:20.241368 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:17:20.241374 | orchestrator | 2026-04-05 01:17:20.241381 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-04-05 01:17:20.241387 | orchestrator | Sunday 05 April 2026 01:10:18 +0000 (0:00:02.002) 0:04:10.009 ********** 2026-04-05 01:17:20.241394 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:17:20.241406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:17:20.241472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:17:20.241487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:17:20.241498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:17:20.241517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 
'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:17:20.241589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:17:20.241605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:17:20.241625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:17:20.241636 | orchestrator | 2026-04-05 01:17:20.241647 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-04-05 01:17:20.241657 | orchestrator | Sunday 05 April 2026 01:10:21 +0000 (0:00:03.498) 0:04:13.507 ********** 2026-04-05 01:17:20.241718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:17:20.241758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:17:20.241846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:17:20.241870 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:17:20.241881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:17:20.241902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:17:20.241984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:17:20.242001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': 
True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:17:20.242039 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:17:20.242047 | orchestrator | 2026-04-05 01:17:20.242054 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-04-05 01:17:20.242061 | orchestrator | Sunday 05 April 2026 01:10:35 +0000 (0:00:13.672) 0:04:27.180 ********** 2026-04-05 01:17:20.242067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 
'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:17:20.242082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:17:20.242132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 01:17:20.242141 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:17:20.242152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:17:20.242159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:17:20.242171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 01:17:20.242178 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:17:20.242203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:17:20.242211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:17:20.242221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 01:17:20.242232 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:17:20.242239 | orchestrator | 2026-04-05 01:17:20.242245 | orchestrator | TASK [nova : Copying 
over nova-api-wsgi.conf] **********************************
2026-04-05 01:17:20.242252 | orchestrator | Sunday 05 April 2026 01:10:37 +0000 (0:00:02.176) 0:04:29.357 **********
2026-04-05 01:17:20.242258 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:17:20.242265 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:17:20.242271 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:17:20.242277 | orchestrator |
2026-04-05 01:17:20.242283 | orchestrator | TASK [nova : Copying over nova-metadata-wsgi.conf] *****************************
2026-04-05 01:17:20.242289 | orchestrator | Sunday 05 April 2026 01:10:40 +0000 (0:00:02.597) 0:04:31.954 **********
2026-04-05 01:17:20.242296 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:17:20.242302 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:17:20.242308 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:17:20.242314 | orchestrator |
2026-04-05 01:17:20.242320 | orchestrator | TASK [nova : Copying over vendordata file for nova services] *******************
2026-04-05 01:17:20.242326 | orchestrator | Sunday 05 April 2026 01:10:41 +0000 (0:00:01.012) 0:04:32.967 **********
2026-04-05 01:17:20.242333 | orchestrator | skipping: [testbed-node-0] => (item=nova-metadata)
2026-04-05 01:17:20.242339 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-04-05 01:17:20.242345 | orchestrator | skipping: [testbed-node-1] => (item=nova-metadata)
2026-04-05 01:17:20.242352 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-04-05 01:17:20.242358 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:17:20.242364 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:17:20.242370 | orchestrator | skipping: [testbed-node-2] => (item=nova-metadata)
2026-04-05 01:17:20.242376 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-04-05 01:17:20.242382 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:17:20.242389 | orchestrator |
2026-04-05 01:17:20.242395 | orchestrator | TASK [Configure uWSGI for Nova] ************************************************
2026-04-05 01:17:20.242401 | orchestrator | Sunday 05 April 2026 01:10:41 +0000 (0:00:00.417) 0:04:33.384 **********
2026-04-05 01:17:20.242407 | orchestrator | included: service-uwsgi-config for testbed-node-1, testbed-node-0, testbed-node-2 => (item={'name': 'nova-api', 'port': '8774', 'workers': '2'})
2026-04-05 01:17:20.242416 | orchestrator | included: service-uwsgi-config for testbed-node-1, testbed-node-0, testbed-node-2 => (item={'name': 'nova-metadata', 'port': '8775', 'workers': '2'})
2026-04-05 01:17:20.242422 | orchestrator |
2026-04-05 01:17:20.242428 | orchestrator | TASK [service-uwsgi-config : Copying over nova-api uWSGI config] ***************
2026-04-05 01:17:20.242434 | orchestrator | Sunday 05 April 2026 01:10:46 +0000 (0:00:04.567) 0:04:37.951 **********
2026-04-05 01:17:20.242440 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:17:20.242447 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:17:20.242453 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:17:20.242459 | orchestrator |
2026-04-05 01:17:20.242465 | orchestrator | TASK [service-uwsgi-config : Copying over nova-metadata uWSGI config] **********
2026-04-05 01:17:20.242472 | orchestrator | Sunday 05 April 2026 01:10:48 +0000 (0:00:02.584) 0:04:40.536 **********
2026-04-05 01:17:20.242478 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:17:20.242484 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:17:20.242490 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:17:20.242496 | orchestrator |
2026-04-05 01:17:20.242502 | orchestrator | TASK [service-check-containers : nova | Check containers] **********************
2026-04-05 01:17:20.242508 | orchestrator | Sunday 05 April 2026 01:10:51 +0000 (0:00:02.727) 0:04:43.264 **********
2026-04-05 01:17:20.242535 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:17:20.242552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:17:20.242559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:17:20.242585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:17:20.242598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:17:20.242609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': 
{'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:17:20.242618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:17:20.242625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:17:20.242633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 01:17:20.242641 | orchestrator |
2026-04-05 01:17:20.242648 | orchestrator | TASK [service-check-containers : nova | Notify handlers to restart containers] ***
2026-04-05 01:17:20.242702 | orchestrator | Sunday 05 April 2026 01:10:54 +0000 (0:00:03.152) 0:04:46.417 **********
2026-04-05 01:17:20.242712 | orchestrator | changed: [testbed-node-0] => {
2026-04-05 01:17:20.242719 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 01:17:20.242727 | orchestrator | }
2026-04-05 01:17:20.242735 | orchestrator | changed: [testbed-node-1] => {
2026-04-05 01:17:20.242742 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 01:17:20.242749 | orchestrator | }
2026-04-05 01:17:20.242757 | orchestrator | changed: [testbed-node-2] => {
2026-04-05 01:17:20.242764 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 01:17:20.242771 | orchestrator | }
2026-04-05 01:17:20.242779 | orchestrator |
2026-04-05 01:17:20.242786 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-05 01:17:20.242795 | orchestrator | Sunday 05 April 2026 01:10:54 +0000 (0:00:00.341) 0:04:46.759 **********
2026-04-05 01:17:20.242816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:17:20.242850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:17:20.242862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 01:17:20.242873 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:17:20.242912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:17:20.242934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:17:20.242951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 01:17:20.242963 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:17:20.242973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:17:20.242985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:17:20.243032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  
2026-04-05 01:17:20.243045 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:17:20.243055 | orchestrator |
2026-04-05 01:17:20.243065 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-04-05 01:17:20.243076 | orchestrator | Sunday 05 April 2026 01:10:56 +0000 (0:00:01.230) 0:04:47.989 **********
2026-04-05 01:17:20.243086 | orchestrator |
2026-04-05 01:17:20.243096 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-04-05 01:17:20.243106 | orchestrator | Sunday 05 April 2026 01:10:56 +0000 (0:00:00.164) 0:04:48.153 **********
2026-04-05 01:17:20.243116 | orchestrator |
2026-04-05 01:17:20.243124 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-04-05 01:17:20.243133 | orchestrator | Sunday 05 April 2026 01:10:56 +0000 (0:00:00.134) 0:04:48.418 **********
2026-04-05 01:17:20.243144 | orchestrator |
2026-04-05 01:17:20.243153 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2026-04-05 01:17:20.243243 | orchestrator | Sunday 05 April 2026 01:10:56 +0000 (0:00:00.134) 0:04:48.418 **********
2026-04-05 01:17:20.243253 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:17:20.243263 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:17:20.243279 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:17:20.243289 | orchestrator |
2026-04-05 01:17:20.243300 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2026-04-05 01:17:20.243310 | orchestrator | Sunday 05 April 2026 01:11:20 +0000 (0:00:24.027) 0:05:12.445 **********
2026-04-05 01:17:20.243320 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:17:20.243330 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:17:20.243340 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:17:20.243349 | orchestrator |
2026-04-05 01:17:20.243360 | orchestrator | RUNNING HANDLER [nova : Restart nova-metadata container] ***********************
2026-04-05 01:17:20.243370 | orchestrator | Sunday 05 April 2026 01:11:33 +0000 (0:00:13.229) 0:05:25.674 **********
2026-04-05 01:17:20.243380 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:17:20.243391 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:17:20.243400 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:17:20.243411 | orchestrator |
2026-04-05 01:17:20.243420 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2026-04-05 01:17:20.243430 | orchestrator |
2026-04-05 01:17:20.243440 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-04-05 01:17:20.243450 | orchestrator | Sunday 05 April 2026 01:11:45 +0000 (0:00:11.490) 0:05:37.165 **********
2026-04-05 01:17:20.243460 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:17:20.243470 | orchestrator |
2026-04-05 01:17:20.243480 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-04-05 01:17:20.243501 | orchestrator | Sunday 05 April 2026 01:11:46 +0000 (0:00:01.268) 0:05:38.433 **********
2026-04-05 01:17:20.243511 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:17:20.243520 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:17:20.243529 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:17:20.243538 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:17:20.243548 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:17:20.243559 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:17:20.243569 | orchestrator |
2026-04-05 01:17:20.243580 | orchestrator | TASK [nova-cell : Get new Libvirt version] *************************************
2026-04-05 01:17:20.243590 | orchestrator | Sunday 05 April 2026 01:11:47 +0000 (0:00:00.516) 0:05:38.950 **********
2026-04-05 01:17:20.243600 | orchestrator | changed: [testbed-node-3]
2026-04-05 01:17:20.243611 | orchestrator |
2026-04-05 01:17:20.243621 | orchestrator | TASK [nova-cell : Cache new Libvirt version] ***********************************
2026-04-05 01:17:20.243631 | orchestrator | Sunday 05 April 2026 01:12:12 +0000 (0:00:25.532) 0:06:04.482 **********
2026-04-05 01:17:20.243641 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:17:20.243652 | orchestrator |
2026-04-05 01:17:20.243662 | orchestrator | TASK [Get nova_libvirt image info] *********************************************
2026-04-05 01:17:20.243697 | orchestrator | Sunday 05 April 2026 01:12:13 +0000 (0:00:01.257) 0:06:05.740 **********
2026-04-05 01:17:20.243708 | orchestrator | included: service-image-info for testbed-node-3
2026-04-05 01:17:20.243719 | orchestrator |
2026-04-05 01:17:20.243729 | orchestrator | TASK [service-image-info : community.docker.docker_image_info] *****************
2026-04-05 01:17:20.243740 | orchestrator | Sunday 05 April 2026 01:12:14 +0000 (0:00:00.858) 0:06:06.598 **********
2026-04-05 01:17:20.243750 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:17:20.243760 | orchestrator |
2026-04-05 01:17:20.243770 | orchestrator | TASK [service-image-info : set_fact] *******************************************
2026-04-05 01:17:20.243781 | orchestrator | Sunday 05 April 2026 01:12:18 +0000 (0:00:03.419) 0:06:10.017 **********
2026-04-05 01:17:20.243790 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:17:20.243801 | orchestrator |
2026-04-05 01:17:20.243810 | orchestrator | TASK [service-image-info : containers.podman.podman_image_info] ****************
2026-04-05 01:17:20.243816 | orchestrator | Sunday 05 April 2026 01:12:20 +0000 (0:00:01.897) 0:06:11.914 **********
2026-04-05 01:17:20.243822 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:17:20.243828 | orchestrator |
2026-04-05 01:17:20.243835 | orchestrator | TASK [service-image-info : set_fact] *******************************************
2026-04-05 01:17:20.243841 | orchestrator | Sunday 05 April 2026 01:12:22 +0000 (0:00:02.099) 0:06:14.014 **********
2026-04-05 01:17:20.243847 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:17:20.243853 | orchestrator |
2026-04-05 01:17:20.243859 | orchestrator | TASK [nova-cell : Get container facts] *****************************************
2026-04-05 01:17:20.243907 | orchestrator | Sunday 05 April 2026 01:12:24 +0000 (0:00:02.137) 0:06:16.152 **********
2026-04-05 01:17:20.243915 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:17:20.243921 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:17:20.243927 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:17:20.243933 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:17:20.243939 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:17:20.243945 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:17:20.243951 | orchestrator |
2026-04-05 01:17:20.243962 | orchestrator | TASK [nova-cell : Get current Libvirt version] *********************************
2026-04-05 01:17:20.243972 | orchestrator | Sunday 05 April 2026 01:12:29 +0000 (0:00:04.690) 0:06:20.842 **********
2026-04-05 01:17:20.243987 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:17:20.244003 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:17:20.244012 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:17:20.244022 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:17:20.244031 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:17:20.244042 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:17:20.244052 | orchestrator |
2026-04-05 01:17:20.244072 | orchestrator | TASK [nova-cell : Check that the new Libvirt version is >= current] ************
2026-04-05 01:17:20.244081 | orchestrator | Sunday 05 April 2026 01:12:31 +0000 (0:00:02.298) 0:06:23.141 **********
2026-04-05 01:17:20.244091 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:17:20.244100 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:17:20.244110 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:17:20.244120 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:17:20.244130 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:17:20.244140 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:17:20.244152 | orchestrator |
2026-04-05 01:17:20.244163 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2026-04-05 01:17:20.244174 | orchestrator | Sunday 05 April 2026 01:12:34 +0000 (0:00:02.957) 0:06:26.098 **********
2026-04-05 01:17:20.244192 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:17:20.244203 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:17:20.244214 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:17:20.244225 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 01:17:20.244237 | orchestrator |
2026-04-05 01:17:20.244249 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-05 01:17:20.244260 | orchestrator | Sunday 05 April 2026 01:12:35 +0000 (0:00:01.062) 0:06:28.042 **********
2026-04-05 01:17:20.244272 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2026-04-05 01:17:20.244280 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2026-04-05 01:17:20.244286 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2026-04-05 01:17:20.244292 | orchestrator |
2026-04-05 01:17:20.244298 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-05 01:17:20.244304 | orchestrator | Sunday 05 April 2026 01:12:36 +0000 (0:00:01.062) 0:06:28.042 **********
2026-04-05 01:17:20.244311 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2026-04-05 01:17:20.244317 | 
orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-04-05 01:17:20.244323 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-04-05 01:17:20.244329 | orchestrator | 2026-04-05 01:17:20.244335 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-05 01:17:20.244341 | orchestrator | Sunday 05 April 2026 01:12:37 +0000 (0:00:01.246) 0:06:29.289 ********** 2026-04-05 01:17:20.244347 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-04-05 01:17:20.244353 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:17:20.244360 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-04-05 01:17:20.244366 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:17:20.244372 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-04-05 01:17:20.244378 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:17:20.244384 | orchestrator | 2026-04-05 01:17:20.244390 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-04-05 01:17:20.244396 | orchestrator | Sunday 05 April 2026 01:12:38 +0000 (0:00:00.577) 0:06:29.866 ********** 2026-04-05 01:17:20.244402 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-05 01:17:20.244408 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-05 01:17:20.244415 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-05 01:17:20.244421 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-05 01:17:20.244427 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:17:20.244433 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-05 01:17:20.244439 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  
2026-04-05 01:17:20.244445 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:17:20.244451 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-05 01:17:20.244495 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-05 01:17:20.244501 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-05 01:17:20.244508 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:17:20.244514 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-05 01:17:20.244520 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-05 01:17:20.244526 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-05 01:17:20.244532 | orchestrator | 2026-04-05 01:17:20.244538 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-04-05 01:17:20.244544 | orchestrator | Sunday 05 April 2026 01:12:39 +0000 (0:00:01.575) 0:06:31.442 ********** 2026-04-05 01:17:20.244550 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:17:20.244557 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:17:20.244563 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:17:20.244600 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:17:20.244607 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:17:20.244613 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:17:20.244619 | orchestrator | 2026-04-05 01:17:20.244626 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-04-05 01:17:20.244632 | orchestrator | Sunday 05 April 2026 01:12:41 +0000 (0:00:01.395) 0:06:32.837 ********** 2026-04-05 01:17:20.244638 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:17:20.244644 | orchestrator | skipping: [testbed-node-1] 2026-04-05 
01:17:20.244650 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:17:20.244656 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:17:20.244662 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:17:20.244719 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:17:20.244726 | orchestrator | 2026-04-05 01:17:20.244732 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-04-05 01:17:20.244738 | orchestrator | Sunday 05 April 2026 01:12:43 +0000 (0:00:02.273) 0:06:35.110 ********** 2026-04-05 01:17:20.244753 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-05 01:17:20.244769 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-05 01:17:20.244785 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-05 01:17:20.244808 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-05 01:17:20.244851 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-05 01:17:20.244864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-05 01:17:20.244879 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-05 01:17:20.244889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 
'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-05 01:17:20.244900 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-05 01:17:20.244918 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-05 01:17:20.244955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': 
{'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 01:17:20.244967 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-05 01:17:20.244980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 01:17:20.244988 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 01:17:20.244994 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-05 01:17:20.245006 | orchestrator | 2026-04-05 01:17:20.245012 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-05 01:17:20.245019 | orchestrator | Sunday 05 April 2026 01:12:46 +0000 (0:00:03.043) 0:06:38.154 ********** 2026-04-05 01:17:20.245025 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:17:20.245033 | orchestrator | 2026-04-05 01:17:20.245039 | orchestrator | TASK [service-cert-copy : nova | 
Copying over extra CA certificates] *********** 2026-04-05 01:17:20.245045 | orchestrator | Sunday 05 April 2026 01:12:47 +0000 (0:00:01.296) 0:06:39.451 ********** 2026-04-05 01:17:20.245071 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-05 01:17:20.245079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-05 01:17:20.245092 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-05 01:17:20.245099 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-05 01:17:20.245110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-05 01:17:20.245117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-05 01:17:20.245143 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-05 01:17:20.245151 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-05 01:17:20.245161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 01:17:20.245168 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-05 01:17:20.245178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 01:17:20.245185 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 01:17:20.245191 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-05 01:17:20.245216 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-05 01:17:20.245226 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-05 01:17:20.245233 | orchestrator | 2026-04-05 01:17:20.245239 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-05 01:17:20.245246 | orchestrator | Sunday 05 April 2026 01:12:52 +0000 (0:00:04.726) 0:06:44.177 ********** 2026-04-05 01:17:20.245257 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': 
{'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 01:17:20.245264 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 01:17:20.245271 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 01:17:20.245295 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 01:17:20.245302 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:17:20.245309 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 01:17:20.245319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 01:17:20.245330 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 01:17:20.245336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:17:20.245343 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 01:17:20.245349 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:17:20.245355 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:17:20.245380 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 01:17:20.245390 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 01:17:20.245401 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:17:20.245408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 01:17:20.245414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 01:17:20.245421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:17:20.245427 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:17:20.245434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:17:20.245440 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:17:20.245447 | orchestrator |
2026-04-05 01:17:20.245454 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2026-04-05 01:17:20.245465 | orchestrator | Sunday 05 April 2026 01:12:54 +0000 (0:00:02.245) 0:06:46.422 **********
2026-04-05 01:17:20.245510 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 01:17:20.245536 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 01:17:20.245546 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 01:17:20.245557 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 01:17:20.245567 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 01:17:20.245606 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 01:17:20.245618 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:17:20.245628 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 01:17:20.245644 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 01:17:20.245651 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:17:20.245657 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 01:17:20.245664 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:17:20.245689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 01:17:20.245696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 01:17:20.245723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:17:20.245735 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:17:20.245742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:17:20.245748 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:17:20.245761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 01:17:20.245767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:17:20.245774 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:17:20.245780 | orchestrator |
2026-04-05 01:17:20.245786 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-04-05 01:17:20.245792 | orchestrator | Sunday 05 April 2026 01:12:57 +0000 (0:00:02.456) 0:06:48.878 **********
2026-04-05 01:17:20.245799 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:17:20.245805 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:17:20.245811 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:17:20.245817 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 01:17:20.245824 | orchestrator |
2026-04-05 01:17:20.245830 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2026-04-05 01:17:20.245836 | orchestrator | Sunday 05 April 2026 01:12:58 +0000 (0:00:01.031) 0:06:49.910 **********
2026-04-05 01:17:20.245843 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-05 01:17:20.245849 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-05 01:17:20.245855 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-05 01:17:20.245861 | orchestrator |
2026-04-05 01:17:20.245867 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2026-04-05 01:17:20.245874 | orchestrator | Sunday 05 April 2026 01:12:59 +0000 (0:00:01.097) 0:06:51.007 **********
2026-04-05 01:17:20.245880 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-05 01:17:20.245886 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-05 01:17:20.245892 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-05 01:17:20.245898 | orchestrator |
2026-04-05 01:17:20.245904 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2026-04-05 01:17:20.245910 | orchestrator | Sunday 05 April 2026 01:13:00 +0000 (0:00:01.044) 0:06:52.052 **********
2026-04-05 01:17:20.245917 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:17:20.245923 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:17:20.245929 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:17:20.245935 | orchestrator |
2026-04-05 01:17:20.245946 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2026-04-05 01:17:20.245952 | orchestrator | Sunday 05 April 2026 01:13:01 +0000 (0:00:00.811) 0:06:52.864 **********
2026-04-05 01:17:20.245958 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:17:20.245964 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:17:20.245971 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:17:20.245977 | orchestrator |
2026-04-05 01:17:20.245983 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2026-04-05 01:17:20.245989 | orchestrator | Sunday 05 April 2026 01:13:01 +0000 (0:00:00.500) 0:06:53.365 **********
2026-04-05 01:17:20.245995 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-04-05 01:17:20.246002 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-04-05 01:17:20.246008 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-04-05 01:17:20.246036 | orchestrator |
2026-04-05 01:17:20.246064 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2026-04-05 01:17:20.246072 | orchestrator | Sunday 05 April 2026 01:13:02 +0000 (0:00:01.220) 0:06:54.585 **********
2026-04-05 01:17:20.246078 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-04-05 01:17:20.246084 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-04-05 01:17:20.246091 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-04-05 01:17:20.246097 | orchestrator |
2026-04-05 01:17:20.246103 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2026-04-05 01:17:20.246110 | orchestrator | Sunday 05 April 2026 01:13:03 +0000 (0:00:01.189) 0:06:55.775 **********
2026-04-05 01:17:20.246116 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-04-05 01:17:20.246122 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-04-05 01:17:20.246129 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-04-05 01:17:20.246135 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2026-04-05 01:17:20.246141 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2026-04-05 01:17:20.246147 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2026-04-05 01:17:20.246153 | orchestrator |
2026-04-05 01:17:20.246160 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2026-04-05 01:17:20.246166 | orchestrator | Sunday 05 April 2026 01:13:07 +0000 (0:00:04.040) 0:06:59.815 **********
2026-04-05 01:17:20.246172 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:17:20.246179 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:17:20.246189 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:17:20.246199 | orchestrator |
2026-04-05 01:17:20.246209 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2026-04-05 01:17:20.246234 | orchestrator | Sunday 05 April 2026 01:13:08 +0000 (0:00:00.351) 0:07:00.167 **********
2026-04-05 01:17:20.246244 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:17:20.246254 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:17:20.246264 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:17:20.246275 | orchestrator |
2026-04-05 01:17:20.246286 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2026-04-05 01:17:20.246296 | orchestrator | Sunday 05 April 2026 01:13:08 +0000 (0:00:00.311) 0:07:00.478 **********
2026-04-05 01:17:20.246306 | orchestrator | changed: [testbed-node-3]
2026-04-05 01:17:20.246316 | orchestrator | changed: [testbed-node-5]
2026-04-05 01:17:20.246325 | orchestrator | changed: [testbed-node-4]
2026-04-05 01:17:20.246331 | orchestrator |
2026-04-05 01:17:20.246338 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2026-04-05 01:17:20.246344 | orchestrator | Sunday 05 April 2026 01:13:10 +0000 (0:00:01.659) 0:07:02.137 **********
2026-04-05 01:17:20.246350 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True})
2026-04-05 01:17:20.246358 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True})
2026-04-05 01:17:20.246371 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True})
2026-04-05 01:17:20.246379 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'})
2026-04-05 01:17:20.246385 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'})
2026-04-05 01:17:20.246392 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'})
2026-04-05 01:17:20.246398 | orchestrator |
2026-04-05 01:17:20.246404 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2026-04-05 01:17:20.246410 | orchestrator | Sunday 05 April 2026 01:13:13 +0000 (0:00:03.656) 0:07:05.794 **********
2026-04-05 01:17:20.246416 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-05 01:17:20.246423 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-05 01:17:20.246429 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-05 01:17:20.246435 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-05 01:17:20.246441 | orchestrator | changed: [testbed-node-4]
2026-04-05 01:17:20.246447 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-05 01:17:20.246453 | orchestrator | changed: [testbed-node-3]
2026-04-05 01:17:20.246460 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-05 01:17:20.246466 | orchestrator | changed: [testbed-node-5]
2026-04-05 01:17:20.246475 | orchestrator |
2026-04-05 01:17:20.246482 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] *************************
2026-04-05 01:17:20.246488 | orchestrator | Sunday 05 April 2026 01:13:19 +0000 (0:00:05.169) 0:07:10.964 **********
2026-04-05 01:17:20.246494 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:17:20.246500 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:17:20.246506 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:17:20.246513 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-5, testbed-node-4, testbed-node-3
2026-04-05 01:17:20.246519 | orchestrator |
2026-04-05 01:17:20.246548 | orchestrator | TASK [nova-cell : Check qemu wrapper file] *************************************
2026-04-05 01:17:20.246555 | orchestrator | Sunday 05 April 2026 01:13:21 +0000 (0:00:02.309) 0:07:13.274 **********
2026-04-05 01:17:20.246561 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-05 01:17:20.246567 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-05 01:17:20.246573 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-05 01:17:20.246580 | orchestrator |
2026-04-05 01:17:20.246586 | orchestrator | TASK [nova-cell : Copy qemu wrapper] *******************************************
2026-04-05 01:17:20.246592 | orchestrator | Sunday 05 April 2026 01:13:23 +0000 (0:00:01.997) 0:07:15.273 **********
2026-04-05 01:17:20.246598 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:17:20.246604 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:17:20.246610 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:17:20.246616 | orchestrator |
2026-04-05 01:17:20.246622 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2026-04-05 01:17:20.246628 | orchestrator | Sunday 05 April 2026 01:13:23 +0000 (0:00:00.108) 0:07:15.643 **********
2026-04-05 01:17:20.246634 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:17:20.246640 | orchestrator |
2026-04-05 01:17:20.246646 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2026-04-05 01:17:20.246652 | orchestrator | Sunday 05 April 2026 01:13:23 +0000 (0:00:00.108) 0:07:15.752 **********
2026-04-05 01:17:20.246663 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:17:20.246696 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:17:20.246702 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:17:20.246709 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:17:20.246715 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:17:20.246721 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:17:20.246727 | orchestrator |
2026-04-05 01:17:20.246733 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2026-04-05 01:17:20.246743 | orchestrator | Sunday 05 April 2026 01:13:24 +0000 (0:00:00.930) 0:07:16.682 **********
2026-04-05 01:17:20.246749 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-05 01:17:20.246755 | orchestrator |
2026-04-05 01:17:20.246762 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2026-04-05 01:17:20.246768 | orchestrator | Sunday 05 April 2026 01:13:25 +0000 (0:00:00.948) 0:07:17.631 **********
2026-04-05 01:17:20.246774 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:17:20.246780 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:17:20.246786 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:17:20.246792 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:17:20.246798 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:17:20.246804 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:17:20.246810 | orchestrator |
2026-04-05 01:17:20.246816 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2026-04-05 01:17:20.246822 | orchestrator | Sunday 05 April 2026 01:13:26 +0000 (0:00:00.849) 0:07:18.481 **********
2026-04-05 01:17:20.246829 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 01:17:20.246836 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 01:17:20.246863 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 01:17:20.246876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 01:17:20.246887 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 01:17:20.246893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 01:17:20.246900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 01:17:20.246906 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 01:17:20.246930 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 01:17:20.246942 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 01:17:20.246952 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-05 01:17:20.246959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 01:17:20.246966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 01:17:20.246972 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-05 01:17:20.246979 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-05 01:17:20.246989 | orchestrator | 2026-04-05 01:17:20.246995 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-04-05 01:17:20.247020 | orchestrator | Sunday 05 April 2026 01:13:31 +0000 (0:00:05.270) 0:07:23.752 ********** 2026-04-05 01:17:20.247027 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-05 01:17:20.247037 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-05 01:17:20.247044 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 
'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-05 01:17:20.247050 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-05 01:17:20.247057 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-05 01:17:20.247072 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-05 01:17:20.247078 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-05 01:17:20.247088 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': 
True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-05 01:17:20.247094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-05 01:17:20.247101 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-05 
01:17:20.247107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-05 01:17:20.247124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-05 01:17:20.247131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 01:17:20.247143 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 01:17:20.247150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 01:17:20.247156 | orchestrator | 2026-04-05 01:17:20.247162 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-04-05 01:17:20.247168 | orchestrator | Sunday 05 April 2026 01:13:39 +0000 (0:00:07.965) 0:07:31.717 ********** 2026-04-05 01:17:20.247175 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:17:20.247181 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:17:20.247187 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:17:20.247193 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:17:20.247199 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:17:20.247205 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:17:20.247212 | orchestrator | 2026-04-05 01:17:20.247218 | orchestrator | TASK [nova-cell : 
Copying over libvirt configuration] ************************** 2026-04-05 01:17:20.247224 | orchestrator | Sunday 05 April 2026 01:13:41 +0000 (0:00:01.946) 0:07:33.663 ********** 2026-04-05 01:17:20.247230 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-05 01:17:20.247236 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-05 01:17:20.247249 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-05 01:17:20.247255 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-05 01:17:20.247262 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-05 01:17:20.247268 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-05 01:17:20.247274 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-05 01:17:20.247282 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:17:20.247293 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-05 01:17:20.247303 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:17:20.247314 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-05 01:17:20.247325 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:17:20.247336 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-05 01:17:20.247347 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-05 01:17:20.247358 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-05 01:17:20.247370 | orchestrator | 
2026-04-05 01:17:20.247381 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-04-05 01:17:20.247398 | orchestrator | Sunday 05 April 2026 01:13:45 +0000 (0:00:04.111) 0:07:37.775 ********** 2026-04-05 01:17:20.247407 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:17:20.247413 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:17:20.247419 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:17:20.247425 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:17:20.247431 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:17:20.247437 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:17:20.247443 | orchestrator | 2026-04-05 01:17:20.247449 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-04-05 01:17:20.247456 | orchestrator | Sunday 05 April 2026 01:13:46 +0000 (0:00:00.800) 0:07:38.575 ********** 2026-04-05 01:17:20.247462 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-05 01:17:20.247468 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-05 01:17:20.247475 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-05 01:17:20.247481 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-05 01:17:20.247487 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-05 01:17:20.247493 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-05 01:17:20.247503 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 
'service': 'nova-libvirt'})  2026-04-05 01:17:20.247509 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-05 01:17:20.247515 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-05 01:17:20.247522 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-05 01:17:20.247528 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:17:20.247534 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-05 01:17:20.247540 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:17:20.247551 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-05 01:17:20.247557 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:17:20.247563 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-05 01:17:20.247569 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-05 01:17:20.247575 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-05 01:17:20.247581 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-05 01:17:20.247588 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-05 01:17:20.247594 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-05 01:17:20.247600 | orchestrator | 2026-04-05 01:17:20.247606 | 
orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-04-05 01:17:20.247612 | orchestrator | Sunday 05 April 2026 01:13:52 +0000 (0:00:05.498) 0:07:44.074 ********** 2026-04-05 01:17:20.247618 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-05 01:17:20.247624 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-05 01:17:20.247630 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-05 01:17:20.247636 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-05 01:17:20.247642 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-05 01:17:20.247648 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-05 01:17:20.247654 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-05 01:17:20.247660 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-05 01:17:20.247686 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-05 01:17:20.247692 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-05 01:17:20.247698 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-05 01:17:20.247704 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-05 01:17:20.247714 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-05 01:17:20.247721 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:17:20.247727 | orchestrator | changed: [testbed-node-4] => 
(item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-05 01:17:20.247733 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-05 01:17:20.247739 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:17:20.247745 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-05 01:17:20.247751 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:17:20.247758 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-05 01:17:20.247764 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-05 01:17:20.247770 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-05 01:17:20.247776 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-05 01:17:20.247782 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-05 01:17:20.247793 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-05 01:17:20.247799 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-05 01:17:20.247805 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-05 01:17:20.247811 | orchestrator | 2026-04-05 01:17:20.247817 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-04-05 01:17:20.247827 | orchestrator | Sunday 05 April 2026 01:13:59 +0000 (0:00:07.220) 0:07:51.295 ********** 2026-04-05 01:17:20.247833 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:17:20.247839 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:17:20.247845 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:17:20.247851 | orchestrator | skipping: [testbed-node-0] 
2026-04-05 01:17:20.247857 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:17:20.247863 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:17:20.247869 | orchestrator | 2026-04-05 01:17:20.247876 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-04-05 01:17:20.247882 | orchestrator | Sunday 05 April 2026 01:14:00 +0000 (0:00:00.643) 0:07:51.938 ********** 2026-04-05 01:17:20.247888 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:17:20.247894 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:17:20.247900 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:17:20.247906 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:17:20.247912 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:17:20.247918 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:17:20.247924 | orchestrator | 2026-04-05 01:17:20.247931 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-04-05 01:17:20.247937 | orchestrator | Sunday 05 April 2026 01:14:01 +0000 (0:00:01.087) 0:07:53.026 ********** 2026-04-05 01:17:20.247943 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:17:20.247949 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:17:20.247955 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:17:20.247961 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:17:20.247967 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:17:20.247974 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:17:20.247980 | orchestrator | 2026-04-05 01:17:20.247986 | orchestrator | TASK [nova-cell : Generating 'hostid' file for nova_compute] ******************* 2026-04-05 01:17:20.247992 | orchestrator | Sunday 05 April 2026 01:14:04 +0000 (0:00:02.880) 0:07:55.906 ********** 2026-04-05 01:17:20.247998 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:17:20.248004 | orchestrator | skipping: [testbed-node-1] 
2026-04-05 01:17:20.248010 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:17:20.248017 | orchestrator | changed: [testbed-node-4]
2026-04-05 01:17:20.248023 | orchestrator | changed: [testbed-node-3]
2026-04-05 01:17:20.248029 | orchestrator | changed: [testbed-node-5]
2026-04-05 01:17:20.248035 | orchestrator |
2026-04-05 01:17:20.248041 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2026-04-05 01:17:20.248048 | orchestrator | Sunday 05 April 2026 01:14:06 +0000 (0:00:02.751) 0:07:58.657 **********
2026-04-05 01:17:20.248054 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 01:17:20.248071 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 01:17:20.248078 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 01:17:20.248084 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:17:20.248091 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 01:17:20.248098 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 01:17:20.248126 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 01:17:20.248133 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:17:20.248143 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 01:17:20.248154 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 01:17:20.248163 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 01:17:20.248169 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:17:20.248176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 01:17:20.248182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:17:20.248189 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:17:20.248195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 01:17:20.248205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:17:20.248211 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:17:20.248222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 01:17:20.248228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:17:20.248235 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:17:20.248241 | orchestrator |
2026-04-05 01:17:20.248247 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2026-04-05 01:17:20.248257 | orchestrator | Sunday 05 April 2026 01:14:08 +0000 (0:00:01.409) 0:08:00.067 **********
2026-04-05 01:17:20.248263 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-04-05 01:17:20.248269 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-04-05 01:17:20.248275 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:17:20.248282 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-04-05 01:17:20.248288 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-04-05 01:17:20.248294 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:17:20.248300 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-04-05 01:17:20.248306 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-04-05 01:17:20.248312 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:17:20.248318 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-04-05 01:17:20.248324 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-04-05 01:17:20.248330 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:17:20.248336 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-04-05 01:17:20.248343 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-04-05 01:17:20.248349 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:17:20.248355 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-04-05 01:17:20.248361 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-04-05 01:17:20.248367 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:17:20.248373 | orchestrator |
2026-04-05 01:17:20.248379 | orchestrator | TASK [service-check-containers : nova_cell | Check containers] *****************
2026-04-05 01:17:20.248385 | orchestrator | Sunday 05 April 2026 01:14:09 +0000 (0:00:00.910) 0:08:00.977 **********
2026-04-05 01:17:20.248396 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 01:17:20.248406 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 01:17:20.248413 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 01:17:20.248423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 01:17:20.248430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 01:17:20.248445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 01:17:20.248456 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 01:17:20.248467 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 01:17:20.248483 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 01:17:20.248494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:17:20.248513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:17:20.248525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:17:20.248543 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 01:17:20.248554 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 01:17:20.248570 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 01:17:20.248580 | orchestrator |
2026-04-05 01:17:20.248592 | orchestrator | TASK [service-check-containers : nova_cell | Notify handlers to restart containers] ***
2026-04-05 01:17:20.248602 | orchestrator | Sunday 05 April 2026 01:14:12 +0000 (0:00:02.992) 0:08:03.969 **********
2026-04-05 01:17:20.248613 | orchestrator | changed: [testbed-node-3] => {
2026-04-05 01:17:20.248625 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 01:17:20.248636 | orchestrator | }
2026-04-05 01:17:20.248649 | orchestrator | changed: [testbed-node-4] => {
2026-04-05 01:17:20.248655 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 01:17:20.248662 | orchestrator | }
2026-04-05 01:17:20.248713 | orchestrator | changed: [testbed-node-5] => {
2026-04-05 01:17:20.248723 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 01:17:20.248734 | orchestrator | }
2026-04-05 01:17:20.248751 | orchestrator | changed: [testbed-node-0] => {
2026-04-05 01:17:20.248760 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 01:17:20.248770 | orchestrator | }
2026-04-05 01:17:20.248779 | orchestrator | changed: [testbed-node-1] => {
2026-04-05 01:17:20.248790 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 01:17:20.248799 | orchestrator | }
2026-04-05 01:17:20.248808 | orchestrator | changed: [testbed-node-2] => {
2026-04-05 01:17:20.248817 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 01:17:20.248827 | orchestrator | }
2026-04-05 01:17:20.248836 | orchestrator |
2026-04-05 01:17:20.248852 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-05 01:17:20.248872 | orchestrator | Sunday 05 April 2026 01:14:13 +0000 (0:00:00.886) 0:08:04.856 **********
2026-04-05 01:17:20.248883 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 01:17:20.248895 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 01:17:20.248905 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 01:17:20.248916 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:17:20.248935 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 01:17:20.248947 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 01:17:20.248962 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 01:17:20.248982 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:17:20.248993 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 01:17:20.249004 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 01:17:20.249014 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 01:17:20.249025 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:17:20.249039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 01:17:20.249047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:17:20.249062 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:17:20.249069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 01:17:20.249075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:17:20.249081 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:17:20.249088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 01:17:20.249094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:17:20.249101 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:17:20.249107 | orchestrator |
2026-04-05 01:17:20.249113 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-04-05 01:17:20.249119 | orchestrator | Sunday 05 April 2026 01:14:14 +0000 (0:00:00.547) 0:08:06.820 **********
2026-04-05 01:17:20.249125 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:17:20.249132 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:17:20.249138 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:17:20.249144 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:17:20.249153 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:17:20.249159 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:17:20.249166 | orchestrator |
2026-04-05 01:17:20.249172 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-05 01:17:20.249178 | orchestrator | Sunday 05 April 2026 01:14:15 +0000 (0:00:00.134) 0:08:07.367 **********
2026-04-05 01:17:20.249184 | orchestrator |
2026-04-05 01:17:20.249190 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-05 01:17:20.249201 | orchestrator | Sunday 05 April 2026 01:14:15 +0000 (0:00:00.145) 0:08:07.501 **********
2026-04-05 01:17:20.249208 | orchestrator |
2026-04-05 01:17:20.249216 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-05 01:17:20.249227 | orchestrator | Sunday 05 April 2026 01:14:16 +0000 (0:00:00.227) 0:08:07.874 **********
2026-04-05 01:17:20.249236 | orchestrator |
2026-04-05 01:17:20.249246 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-05 01:17:20.249256 | orchestrator |
Sunday 05 April 2026 01:14:16 +0000 (0:00:00.126) 0:08:08.001 ********** 2026-04-05 01:17:20.249297 | orchestrator | 2026-04-05 01:17:20.249308 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-05 01:17:20.249315 | orchestrator | Sunday 05 April 2026 01:14:16 +0000 (0:00:00.119) 0:08:08.120 ********** 2026-04-05 01:17:20.249321 | orchestrator | 2026-04-05 01:17:20.249327 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-04-05 01:17:20.249333 | orchestrator | Sunday 05 April 2026 01:14:16 +0000 (0:00:00.123) 0:08:08.244 ********** 2026-04-05 01:17:20.249339 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:17:20.249349 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:17:20.249355 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:17:20.249361 | orchestrator | 2026-04-05 01:17:20.249368 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-04-05 01:17:20.249374 | orchestrator | Sunday 05 April 2026 01:14:28 +0000 (0:00:12.177) 0:08:20.421 ********** 2026-04-05 01:17:20.249380 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:17:20.249386 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:17:20.249392 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:17:20.249398 | orchestrator | 2026-04-05 01:17:20.249404 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-04-05 01:17:20.249410 | orchestrator | Sunday 05 April 2026 01:14:49 +0000 (0:00:20.707) 0:08:41.129 ********** 2026-04-05 01:17:20.249417 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:17:20.249423 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:17:20.249429 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:17:20.249435 | orchestrator | 2026-04-05 01:17:20.249441 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] 
******************* 2026-04-05 01:17:20.249447 | orchestrator | Sunday 05 April 2026 01:15:07 +0000 (0:00:18.697) 0:08:59.827 ********** 2026-04-05 01:17:20.249453 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:17:20.249459 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:17:20.249466 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:17:20.249472 | orchestrator | 2026-04-05 01:17:20.249478 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-04-05 01:17:20.249484 | orchestrator | Sunday 05 April 2026 01:15:35 +0000 (0:00:27.455) 0:09:27.282 ********** 2026-04-05 01:17:20.249490 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:17:20.249496 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 2026-04-05 01:17:20.249503 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2026-04-05 01:17:20.249510 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:17:20.249516 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:17:20.249522 | orchestrator | 2026-04-05 01:17:20.249528 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-04-05 01:17:20.249534 | orchestrator | Sunday 05 April 2026 01:15:41 +0000 (0:00:06.189) 0:09:33.472 ********** 2026-04-05 01:17:20.249540 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:17:20.249546 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:17:20.249552 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:17:20.249563 | orchestrator | 2026-04-05 01:17:20.249569 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-04-05 01:17:20.249575 | orchestrator | Sunday 05 April 2026 01:15:42 +0000 (0:00:00.950) 0:09:34.422 ********** 2026-04-05 01:17:20.249582 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:17:20.249588 | 
orchestrator | changed: [testbed-node-5] 2026-04-05 01:17:20.249594 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:17:20.249600 | orchestrator | 2026-04-05 01:17:20.249606 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-04-05 01:17:20.249613 | orchestrator | Sunday 05 April 2026 01:16:03 +0000 (0:00:21.362) 0:09:55.785 ********** 2026-04-05 01:17:20.249619 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:17:20.249625 | orchestrator | 2026-04-05 01:17:20.249631 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-04-05 01:17:20.249637 | orchestrator | Sunday 05 April 2026 01:16:04 +0000 (0:00:00.131) 0:09:55.917 ********** 2026-04-05 01:17:20.249643 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:17:20.249650 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:17:20.249656 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:17:20.249662 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:17:20.249687 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:17:20.249694 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2026-04-05 01:17:20.249701 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-05 01:17:20.249707 | orchestrator | 2026-04-05 01:17:20.249714 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-04-05 01:17:20.249720 | orchestrator | Sunday 05 April 2026 01:16:26 +0000 (0:00:21.926) 0:10:17.843 ********** 2026-04-05 01:17:20.249730 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:17:20.249737 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:17:20.249743 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:17:20.249749 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:17:20.249755 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:17:20.249761 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:17:20.249767 | orchestrator | 2026-04-05 01:17:20.249774 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-04-05 01:17:20.249780 | orchestrator | Sunday 05 April 2026 01:16:37 +0000 (0:00:11.875) 0:10:29.718 ********** 2026-04-05 01:17:20.249786 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:17:20.249792 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:17:20.249798 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:17:20.249805 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:17:20.249811 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:17:20.249817 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2026-04-05 01:17:20.249823 | orchestrator | 2026-04-05 01:17:20.249830 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-04-05 01:17:20.249836 | orchestrator | Sunday 05 April 2026 01:16:42 +0000 (0:00:04.817) 0:10:34.536 ********** 2026-04-05 01:17:20.249842 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-05 01:17:20.249848 | 
orchestrator | 2026-04-05 01:17:20.249854 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-05 01:17:20.249861 | orchestrator | Sunday 05 April 2026 01:16:57 +0000 (0:00:14.365) 0:10:48.901 ********** 2026-04-05 01:17:20.249867 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-05 01:17:20.249873 | orchestrator | 2026-04-05 01:17:20.249879 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-04-05 01:17:20.249889 | orchestrator | Sunday 05 April 2026 01:16:58 +0000 (0:00:01.340) 0:10:50.242 ********** 2026-04-05 01:17:20.249895 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:17:20.249901 | orchestrator | 2026-04-05 01:17:20.249908 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-04-05 01:17:20.249918 | orchestrator | Sunday 05 April 2026 01:16:59 +0000 (0:00:01.341) 0:10:51.583 ********** 2026-04-05 01:17:20.249924 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-05 01:17:20.249930 | orchestrator | 2026-04-05 01:17:20.249937 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-04-05 01:17:20.249943 | orchestrator | 2026-04-05 01:17:20.249949 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-04-05 01:17:20.249955 | orchestrator | Sunday 05 April 2026 01:17:12 +0000 (0:00:12.622) 0:11:04.205 ********** 2026-04-05 01:17:20.249961 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:17:20.249967 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:17:20.249973 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:17:20.249980 | orchestrator | 2026-04-05 01:17:20.249986 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-04-05 01:17:20.249992 | orchestrator | 2026-04-05 
01:17:20.249998 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-04-05 01:17:20.250005 | orchestrator | Sunday 05 April 2026 01:17:13 +0000 (0:00:01.223) 0:11:05.428 ********** 2026-04-05 01:17:20.250011 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:17:20.250041 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:17:20.250048 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:17:20.250054 | orchestrator | 2026-04-05 01:17:20.250060 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-04-05 01:17:20.250067 | orchestrator | 2026-04-05 01:17:20.250073 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-04-05 01:17:20.250079 | orchestrator | Sunday 05 April 2026 01:17:14 +0000 (0:00:00.567) 0:11:05.995 ********** 2026-04-05 01:17:20.250086 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-04-05 01:17:20.250092 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-04-05 01:17:20.250098 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-04-05 01:17:20.250104 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-04-05 01:17:20.250110 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-04-05 01:17:20.250117 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-04-05 01:17:20.250123 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:17:20.250129 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-04-05 01:17:20.250135 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-04-05 01:17:20.250142 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-04-05 01:17:20.250148 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-04-05 01:17:20.250154 | orchestrator | 
skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-04-05 01:17:20.250160 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-04-05 01:17:20.250166 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-04-05 01:17:20.250172 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-04-05 01:17:20.250178 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-04-05 01:17:20.250185 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-04-05 01:17:20.250191 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-04-05 01:17:20.250197 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-04-05 01:17:20.250203 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:17:20.250210 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-04-05 01:17:20.250216 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-04-05 01:17:20.250222 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-04-05 01:17:20.250228 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-04-05 01:17:20.250234 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-04-05 01:17:20.250250 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-04-05 01:17:20.250261 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:17:20.250276 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-04-05 01:17:20.250288 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-04-05 01:17:20.250297 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-04-05 01:17:20.250308 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-04-05 01:17:20.250317 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-04-05 
01:17:20.250325 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-04-05 01:17:20.250335 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:17:20.250344 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:17:20.250355 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-04-05 01:17:20.250364 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-04-05 01:17:20.250375 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-04-05 01:17:20.250386 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-04-05 01:17:20.250396 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-04-05 01:17:20.250406 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-04-05 01:17:20.250415 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:17:20.250421 | orchestrator | 2026-04-05 01:17:20.250428 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-04-05 01:17:20.250434 | orchestrator | 2026-04-05 01:17:20.250440 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-04-05 01:17:20.250450 | orchestrator | Sunday 05 April 2026 01:17:15 +0000 (0:00:01.496) 0:11:07.492 ********** 2026-04-05 01:17:20.250457 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-04-05 01:17:20.250463 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-04-05 01:17:20.250469 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:17:20.250475 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-04-05 01:17:20.250481 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-04-05 01:17:20.250487 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:17:20.250493 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-04-05 01:17:20.250500 | 
orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-04-05 01:17:20.250506 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:17:20.250512 | orchestrator | 2026-04-05 01:17:20.250518 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-04-05 01:17:20.250524 | orchestrator | 2026-04-05 01:17:20.250530 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-04-05 01:17:20.250536 | orchestrator | Sunday 05 April 2026 01:17:16 +0000 (0:00:00.771) 0:11:08.264 ********** 2026-04-05 01:17:20.250542 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:17:20.250548 | orchestrator | 2026-04-05 01:17:20.250554 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-04-05 01:17:20.250561 | orchestrator | 2026-04-05 01:17:20.250567 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-04-05 01:17:20.250573 | orchestrator | Sunday 05 April 2026 01:17:17 +0000 (0:00:00.872) 0:11:09.136 ********** 2026-04-05 01:17:20.250579 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:17:20.250585 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:17:20.250591 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:17:20.250597 | orchestrator | 2026-04-05 01:17:20.250603 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:17:20.250609 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 01:17:20.250617 | orchestrator | testbed-node-0 : ok=59  changed=39  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0 2026-04-05 01:17:20.250632 | orchestrator | testbed-node-1 : ok=32  changed=23  unreachable=0 failed=0 skipped=60  rescued=0 ignored=0 2026-04-05 01:17:20.250638 | orchestrator | testbed-node-2 : ok=32  changed=23  unreachable=0 
failed=0 skipped=60  rescued=0 ignored=0 2026-04-05 01:17:20.250645 | orchestrator | testbed-node-3 : ok=47  changed=30  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2026-04-05 01:17:20.250651 | orchestrator | testbed-node-4 : ok=46  changed=29  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-04-05 01:17:20.250657 | orchestrator | testbed-node-5 : ok=41  changed=29  unreachable=0 failed=0 skipped=23  rescued=0 ignored=0 2026-04-05 01:17:20.250663 | orchestrator | 2026-04-05 01:17:20.250712 | orchestrator | 2026-04-05 01:17:20.250719 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:17:20.250725 | orchestrator | Sunday 05 April 2026 01:17:17 +0000 (0:00:00.448) 0:11:09.584 ********** 2026-04-05 01:17:20.250732 | orchestrator | =============================================================================== 2026-04-05 01:17:20.250738 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 51.02s 2026-04-05 01:17:20.250744 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 27.46s 2026-04-05 01:17:20.250751 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 25.54s 2026-04-05 01:17:20.250762 | orchestrator | nova-cell : Get new Libvirt version ------------------------------------ 25.53s 2026-04-05 01:17:20.250768 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 24.03s 2026-04-05 01:17:20.250775 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.93s 2026-04-05 01:17:20.250781 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 21.36s 2026-04-05 01:17:20.250787 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 20.71s 2026-04-05 01:17:20.250793 | orchestrator | nova : Running Nova API bootstrap container 
---------------------------- 19.61s 2026-04-05 01:17:20.250799 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 18.70s 2026-04-05 01:17:20.250805 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 15.68s 2026-04-05 01:17:20.250811 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 15.39s 2026-04-05 01:17:20.250818 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.70s 2026-04-05 01:17:20.250824 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.37s 2026-04-05 01:17:20.250830 | orchestrator | nova-cell : Create cell ------------------------------------------------ 13.88s 2026-04-05 01:17:20.250836 | orchestrator | nova : Copying over nova.conf ------------------------------------------ 13.67s 2026-04-05 01:17:20.250842 | orchestrator | nova : Restart nova-api container -------------------------------------- 13.23s 2026-04-05 01:17:20.250848 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------ 12.91s 2026-04-05 01:17:20.250858 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.62s 2026-04-05 01:17:20.250865 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.18s 2026-04-05 01:17:20.250871 | orchestrator | 2026-04-05 01:17:20 | INFO  | Task edc42a7b-34ed-44b2-9a20-9c240bd4126a is in state STARTED 2026-04-05 01:17:20.250877 | orchestrator | 2026-04-05 01:17:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:17:23.289990 | orchestrator | 2026-04-05 01:17:23 | INFO  | Task edc42a7b-34ed-44b2-9a20-9c240bd4126a is in state STARTED 2026-04-05 01:17:23.291294 | orchestrator | 2026-04-05 01:17:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:17:26.336136 | orchestrator | 2026-04-05 01:17:26 | INFO  | Task 
edc42a7b-34ed-44b2-9a20-9c240bd4126a is in state STARTED
2026-04-05 01:18:39.502078 | orchestrator | 2026-04-05 01:18:39 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:18:42.555272 | orchestrator | 2026-04-05 01:18:42 | INFO  | Task edc42a7b-34ed-44b2-9a20-9c240bd4126a is in state STARTED
2026-04-05 01:18:42.555394 | orchestrator | 2026-04-05 01:18:42 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:18:45.603461 | orchestrator | 2026-04-05 01:18:45 | INFO  | Task edc42a7b-34ed-44b2-9a20-9c240bd4126a is in state STARTED
2026-04-05 01:18:45.603548 | orchestrator | 2026-04-05 01:18:45 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:18:48.645564 | orchestrator | 2026-04-05 01:18:48 | INFO  | Task edc42a7b-34ed-44b2-9a20-9c240bd4126a is in state SUCCESS
2026-04-05 01:18:48.647937 | orchestrator |
2026-04-05 01:18:48.647998 | orchestrator |
2026-04-05 01:18:48.648009 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 01:18:48.648018 | orchestrator |
2026-04-05 01:18:48.648026 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 01:18:48.648086 | orchestrator | Sunday 05 April 2026 01:13:35 +0000 (0:00:00.508) 0:00:00.508 **********
2026-04-05 01:18:48.648096 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:18:48.648105 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:18:48.648133 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:18:48.648140 | orchestrator |
2026-04-05 01:18:48.648147 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 01:18:48.648154 | orchestrator | Sunday 05 April 2026 01:13:35 +0000 (0:00:00.443) 0:00:00.951 **********
2026-04-05 01:18:48.648160 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-04-05 01:18:48.648168 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-04-05 01:18:48.648174 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-04-05 01:18:48.648181 | orchestrator |
2026-04-05 01:18:48.648188 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-04-05 01:18:48.648194 | orchestrator |
2026-04-05 01:18:48.648201 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-05 01:18:48.648208 | orchestrator | Sunday 05 April 2026 01:13:35 +0000 (0:00:00.472) 0:00:01.424 **********
2026-04-05 01:18:48.648244 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:18:48.648254 | orchestrator |
2026-04-05 01:18:48.648260 | orchestrator | TASK [service-ks-register : octavia | Creating/deleting services] **************
2026-04-05 01:18:48.648267 | orchestrator | Sunday 05 April 2026 01:13:36 +0000 (0:00:00.940) 0:00:02.365 **********
2026-04-05 01:18:48.648274 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2026-04-05 01:18:48.648281 | orchestrator |
2026-04-05 01:18:48.648288 | orchestrator | TASK [service-ks-register : octavia | Creating/deleting endpoints] *************
2026-04-05 01:18:48.648294 | orchestrator | Sunday 05 April 2026 01:13:41 +0000 (0:00:04.269) 0:00:06.635 **********
2026-04-05 01:18:48.648301 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2026-04-05 01:18:48.648308 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2026-04-05 01:18:48.648314 | orchestrator |
2026-04-05 01:18:48.648331 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2026-04-05 01:18:48.648338 | orchestrator | Sunday 05 April 2026 01:13:48 +0000 (0:00:07.598) 0:00:14.234 **********
2026-04-05 01:18:48.648345 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-05 01:18:48.648352 | orchestrator |
2026-04-05 01:18:48.648359 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2026-04-05 01:18:48.648366 | orchestrator | Sunday 05 April 2026 01:13:52 +0000 (0:00:03.594) 0:00:17.828 **********
2026-04-05 01:18:48.648372 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-04-05 01:18:48.648379 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-04-05 01:18:48.648386 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-05 01:18:48.648393 | orchestrator |
2026-04-05 01:18:48.648399 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2026-04-05 01:18:48.648406 | orchestrator | Sunday 05 April 2026 01:14:01 +0000 (0:00:09.133) 0:00:26.962 **********
2026-04-05 01:18:48.648412 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-05 01:18:48.648419 | orchestrator |
2026-04-05 01:18:48.648426 | orchestrator | TASK [service-ks-register : octavia | Granting/revoking user roles] ************
2026-04-05 01:18:48.648432 | orchestrator | Sunday 05 April 2026 01:14:05 +0000 (0:00:03.737) 0:00:30.699 **********
2026-04-05 01:18:48.648439 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2026-04-05 01:18:48.648445 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2026-04-05 01:18:48.648452 | orchestrator |
2026-04-05 01:18:48.648459 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2026-04-05 01:18:48.648465 | orchestrator | Sunday 05 April 2026 01:14:13 +0000 (0:00:08.119) 0:00:38.818 **********
2026-04-05 01:18:48.648472 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2026-04-05 01:18:48.648478 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2026-04-05 01:18:48.648492 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2026-04-05 01:18:48.648499 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2026-04-05 01:18:48.648507 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2026-04-05 01:18:48.648515 | orchestrator |
2026-04-05 01:18:48.648523 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-05 01:18:48.648531 | orchestrator | Sunday 05 April 2026 01:14:31 +0000 (0:00:17.667) 0:00:56.486 **********
2026-04-05 01:18:48.648539 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:18:48.648547 | orchestrator |
2026-04-05 01:18:48.648554 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2026-04-05 01:18:48.648562 | orchestrator | Sunday 05 April 2026 01:14:31 +0000 (0:00:00.792) 0:00:57.278 **********
2026-04-05 01:18:48.648570 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:18:48.648578 | orchestrator |
2026-04-05 01:18:48.648586 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2026-04-05 01:18:48.648594 | orchestrator | Sunday 05 April 2026 01:14:37 +0000 (0:00:06.116) 0:01:03.395 **********
2026-04-05 01:18:48.648601 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:18:48.648609 | orchestrator |
2026-04-05 01:18:48.648617 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-04-05 01:18:48.648635 | orchestrator | Sunday 05 April 2026 01:14:43 +0000 (0:00:05.165) 0:01:08.560 **********
2026-04-05 01:18:48.648643 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:18:48.648651 | orchestrator |
2026-04-05 01:18:48.648659 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2026-04-05 01:18:48.648667 | orchestrator | Sunday 05 April 2026 01:14:46 +0000 (0:00:03.454) 0:01:12.015 **********
2026-04-05 01:18:48.648674 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-04-05 01:18:48.648682 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-04-05 01:18:48.648690 | orchestrator |
2026-04-05 01:18:48.648698 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2026-04-05 01:18:48.648706 | orchestrator | Sunday 05 April 2026 01:14:58 +0000 (0:00:11.620) 0:01:23.635 **********
2026-04-05 01:18:48.648714 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2026-04-05 01:18:48.648722 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2026-04-05 01:18:48.648731 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2026-04-05 01:18:48.648740 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2026-04-05 01:18:48.649033 | orchestrator |
2026-04-05 01:18:48.649042 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2026-04-05 01:18:48.649049 | orchestrator | Sunday 05 April 2026 01:15:14 +0000 (0:00:16.609) 0:01:40.245 **********
2026-04-05 01:18:48.649056 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:18:48.649063 | orchestrator |
2026-04-05 01:18:48.649070 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2026-04-05 01:18:48.649076 | orchestrator | Sunday 05 April 2026 01:15:19 +0000 (0:00:05.092) 0:01:45.338 **********
2026-04-05 01:18:48.649083 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:18:48.649090 | orchestrator |
2026-04-05 01:18:48.649097 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2026-04-05 01:18:48.649103 | orchestrator | Sunday 05 April 2026 01:15:25 +0000 (0:00:05.954) 0:01:51.293 **********
2026-04-05 01:18:48.649110 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:18:48.649117 | orchestrator |
2026-04-05 01:18:48.649129 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2026-04-05 01:18:48.649143 | orchestrator | Sunday 05 April 2026 01:15:26 +0000 (0:00:00.578) 0:01:51.871 **********
2026-04-05 01:18:48.649150 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:18:48.649156 | orchestrator |
2026-04-05 01:18:48.649163 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-05 01:18:48.649170 | orchestrator | Sunday 05 April 2026 01:15:31 +0000 (0:00:05.266) 0:01:57.138 **********
2026-04-05 01:18:48.649177 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:18:48.649184 | orchestrator |
2026-04-05 01:18:48.649190 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2026-04-05 01:18:48.649197 | orchestrator | Sunday 05 April 2026 01:15:32 +0000 (0:00:00.948) 0:01:58.087 **********
2026-04-05 01:18:48.649203 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:18:48.649210 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:18:48.649217 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:18:48.649223 | orchestrator |
2026-04-05 01:18:48.649230 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2026-04-05 01:18:48.649237 | orchestrator | Sunday 05 April 2026 01:15:39 +0000 (0:00:06.451) 0:02:04.539 **********
2026-04-05 01:18:48.649243 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:18:48.649250 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:18:48.649257 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:18:48.649263 | orchestrator |
2026-04-05 01:18:48.649270 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2026-04-05 01:18:48.649277 | orchestrator | Sunday 05 April 2026 01:15:44 +0000 (0:00:05.177) 0:02:09.716 **********
2026-04-05 01:18:48.649284 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:18:48.649290 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:18:48.649297 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:18:48.649304 | orchestrator |
2026-04-05 01:18:48.649310 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2026-04-05 01:18:48.649317 | orchestrator | Sunday 05 April 2026 01:15:45 +0000 (0:00:00.875) 0:02:10.591 **********
2026-04-05 01:18:48.649324 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:18:48.649330 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:18:48.649337 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:18:48.649344 | orchestrator |
2026-04-05 01:18:48.649350 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2026-04-05 01:18:48.649357 | orchestrator | Sunday 05 April 2026 01:15:48 +0000 (0:00:03.159) 0:02:13.751 **********
2026-04-05 01:18:48.649364 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:18:48.649370 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:18:48.649377 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:18:48.649384 | orchestrator |
2026-04-05 01:18:48.649390 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2026-04-05 01:18:48.649397 | orchestrator | Sunday 05 April 2026 01:15:49 +0000 (0:00:01.393) 0:02:15.144 **********
2026-04-05 01:18:48.649434 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:18:48.649443 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:18:48.649450 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:18:48.649456 | orchestrator |
2026-04-05 01:18:48.649463 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2026-04-05 01:18:48.649470 | orchestrator | Sunday 05 April 2026 01:15:50 +0000 (0:00:01.269) 0:02:16.414 **********
2026-04-05 01:18:48.649476 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:18:48.649483 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:18:48.649490 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:18:48.649496 | orchestrator |
2026-04-05 01:18:48.649510 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2026-04-05 01:18:48.649517 | orchestrator | Sunday 05 April 2026 01:15:53 +0000 (0:00:02.331) 0:02:18.745 **********
2026-04-05 01:18:48.649523 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:18:48.649530 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:18:48.649542 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:18:48.649548 | orchestrator |
2026-04-05 01:18:48.649555 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2026-04-05 01:18:48.649561 | orchestrator | Sunday 05 April 2026 01:15:55 +0000 (0:00:01.786) 0:02:20.531 **********
2026-04-05 01:18:48.649568 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:18:48.649575 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:18:48.649581 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:18:48.649588 | orchestrator |
2026-04-05 01:18:48.649594 | orchestrator | TASK [octavia : Gather facts] **************************************************
2026-04-05 01:18:48.649601 | orchestrator | Sunday 05 April 2026 01:15:55 +0000 (0:00:00.661) 0:02:21.193 **********
2026-04-05 01:18:48.649608 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:18:48.649614 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:18:48.649621 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:18:48.649995 | orchestrator |
2026-04-05 01:18:48.650010 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-05 01:18:48.650046 | orchestrator | Sunday 05 April 2026 01:15:58 +0000 (0:00:02.675) 0:02:23.868 **********
2026-04-05 01:18:48.650053 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:18:48.650060 | orchestrator |
2026-04-05 01:18:48.650067 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2026-04-05 01:18:48.650073 | orchestrator | Sunday 05 April 2026 01:15:59 +0000 (0:00:00.718) 0:02:24.587 **********
2026-04-05 01:18:48.650080 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:18:48.650087 | orchestrator |
2026-04-05 01:18:48.650094 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-04-05 01:18:48.650101 | orchestrator | Sunday 05 April 2026 01:16:03 +0000 (0:00:04.332) 0:02:28.919 **********
2026-04-05 01:18:48.650107 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:18:48.650114 | orchestrator |
2026-04-05 01:18:48.650121 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2026-04-05 01:18:48.650127 | orchestrator | Sunday 05 April 2026 01:16:07 +0000 (0:00:03.636) 0:02:32.556 **********
2026-04-05 01:18:48.650134 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-04-05 01:18:48.650147 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-04-05 01:18:48.650154 | orchestrator |
2026-04-05 01:18:48.650160 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2026-04-05 01:18:48.650167 | orchestrator | Sunday 05 April 2026 01:16:14 +0000 (0:00:07.749) 0:02:40.305 **********
2026-04-05 01:18:48.650174 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:18:48.650180 | orchestrator |
2026-04-05 01:18:48.650187 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2026-04-05 01:18:48.650194 | orchestrator | Sunday 05 April 2026 01:16:18 +0000 (0:00:03.851) 0:02:44.156 **********
2026-04-05 01:18:48.650201 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:18:48.650207 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:18:48.650214 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:18:48.650220 | orchestrator |
2026-04-05 01:18:48.650227 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2026-04-05 01:18:48.650234 | orchestrator | Sunday 05 April 2026 01:16:19 +0000 (0:00:00.325) 0:02:44.482 **********
2026-04-05 01:18:48.650244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-05 01:18:48.650292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-05 01:18:48.650301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-05 01:18:48.650309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-05 01:18:48.650320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-05 01:18:48.650327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-05 01:18:48.650336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-05 01:18:48.650349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-05 01:18:48.650378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-05 01:18:48.650387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-05 01:18:48.650399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-05 01:18:48.650406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-05 01:18:48.650413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:18:48.650426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:18:48.650451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:18:48.650459 | orchestrator |
2026-04-05 01:18:48.650466 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************
2026-04-05 01:18:48.650473 | orchestrator | Sunday 05 April 2026 01:16:21 +0000 (0:00:02.828) 0:02:47.311 **********
2026-04-05 01:18:48.650496 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:18:48.650503 | orchestrator |
2026-04-05 01:18:48.650510 | orchestrator | TASK [octavia : Set octavia policy file] ***************************************
2026-04-05 01:18:48.650517 | orchestrator | Sunday 05 April 2026 01:16:22 +0000 (0:00:00.145) 0:02:47.457 **********
2026-04-05 01:18:48.650524 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:18:48.650530 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:18:48.650537 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:18:48.650544 | orchestrator |
2026-04-05 01:18:48.650551 | orchestrator | TASK [octavia : Copying over existing policy file] *****************************
2026-04-05 01:18:48.650557 | orchestrator | Sunday 05 April 2026 01:16:22 +0000 (0:00:00.319) 0:02:47.777 **********
2026-04-05 01:18:48.650565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-05 01:18:48.650576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-05 01:18:48.650584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-05 01:18:48.650601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-05 01:18:48.650608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:18:48.650615 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:18:48.650644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-05 01:18:48.650653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-05 01:18:48.650668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-05 01:18:48.650681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-05 01:18:48.650689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:18:48.650696 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:18:48.650724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-05 01:18:48.650732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-05 01:18:48.650740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-05 01:18:48.650813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group':
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-05 01:18:48.650827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:18:48.650834 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:18:48.650841 | orchestrator |
2026-04-05 01:18:48.650848 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-05 01:18:48.650855 | orchestrator | Sunday 05 April 2026 01:16:23 +0000 (0:00:00.685) 0:02:48.462 **********
2026-04-05 01:18:48.650862 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:18:48.650868 | orchestrator |
2026-04-05 01:18:48.650875 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ********
2026-04-05 01:18:48.650882 | orchestrator | Sunday 05 April 2026 01:16:23 +0000 (0:00:00.731) 0:02:49.193 **********
2026-04-05 01:18:48.650889 | orchestrator | changed: [testbed-node-0] => (item={'key':
'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 01:18:48.650920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 01:18:48.650932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': 
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 01:18:48.650944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 01:18:48.650952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 01:18:48.650958 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 01:18:48.650965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 01:18:48.650992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 01:18:48.651000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 01:18:48.651011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 01:18:48.651023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 01:18:48.651030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 01:18:48.651037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:18:48.651064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:18:48.651073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:18:48.651080 | orchestrator |
2026-04-05 01:18:48.651086 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] ***
2026-04-05 01:18:48.651093 | orchestrator | Sunday 05 April 2026 01:16:29 +0000 (0:00:05.373) 0:02:54.567 **********
2026-04-05 01:18:48.651104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-05 01:18:48.651117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes':
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 01:18:48.651124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 01:18:48.651130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 01:18:48.651155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:18:48.651162 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:18:48.651169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 01:18:48.651180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 01:18:48.651190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 01:18:48.651196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 01:18:48.651203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:18:48.651209 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:18:48.651219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 01:18:48.651226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 01:18:48.651237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-05 01:18:48.651247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-05 01:18:48.651253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:18:48.651259 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:18:48.651266 | orchestrator |
2026-04-05 01:18:48.651272 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] *****
2026-04-05 01:18:48.651278 | orchestrator | Sunday 05 April 2026 01:16:31 +0000 (0:00:01.931) 0:02:56.499 **********
2026-04-05 01:18:48.651285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image':
'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 01:18:48.651297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 01:18:48.651308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 
'timeout': '30'}}})
2026-04-05 01:18:48.651315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-05 01:18:48.651327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:18:48.651333 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:18:48.651340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-05 01:18:48.651346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-05 01:18:48.651358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-05 01:18:48.651369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-05 01:18:48.651376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:18:48.651382 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:18:48.651392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-05 01:18:48.651399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-05 01:18:48.651405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-05 01:18:48.651417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-05 01:18:48.651428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:18:48.651434 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:18:48.651441 | orchestrator |
2026-04-05 01:18:48.651447 | orchestrator | TASK [octavia : Copying over config.json files for services] *******************
2026-04-05 01:18:48.651453 | orchestrator | Sunday 05 April 2026 01:16:33 +0000 (0:00:02.097) 0:02:58.597 **********
2026-04-05 01:18:48.651463 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-05 01:18:48.651469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-05 01:18:48.651476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-05 01:18:48.651491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-05 01:18:48.651498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-05 01:18:48.651505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-05 01:18:48.651514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-05 01:18:48.651521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-05 01:18:48.651527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-05 01:18:48.651534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-05 01:18:48.651549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-05 01:18:48.651556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-05 01:18:48.651563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:18:48.651572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:18:48.651579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:18:48.651585 | orchestrator |
2026-04-05 01:18:48.651592 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ********************************
2026-04-05 01:18:48.651598 | orchestrator | Sunday 05 April 2026 01:16:39 +0000 (0:00:06.079) 0:03:04.676 **********
2026-04-05 01:18:48.651604 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-04-05 01:18:48.651611 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-04-05 01:18:48.651617 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-04-05 01:18:48.651628 | orchestrator |
2026-04-05 01:18:48.651634 | orchestrator | TASK [octavia : Copying over octavia.conf] *************************************
2026-04-05 01:18:48.651640 | orchestrator | Sunday 05 April 2026 01:16:42 +0000 (0:00:02.853) 0:03:07.529 **********
2026-04-05 01:18:48.651651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-05 01:18:48.651657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-05 01:18:48.651667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-05 01:18:48.651674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-05 01:18:48.651681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-05 01:18:48.651692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-05 01:18:48.651701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-05 01:18:48.651708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-05 01:18:48.651714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-05 01:18:48.651724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-05 01:18:48.651730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-05 01:18:48.651741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-05 01:18:48.651771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:18:48.651782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:18:48.651789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:18:48.651795 | orchestrator |
2026-04-05 01:18:48.651801 | orchestrator | TASK [octavia : Copying over Octavia SSH key] **********************************
2026-04-05 01:18:48.651808 | orchestrator | Sunday 05 April 2026 01:16:58 +0000 (0:00:16.484) 0:03:24.014 **********
2026-04-05 01:18:48.651814 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:18:48.651821 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:18:48.651827 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:18:48.651833 | orchestrator |
2026-04-05 01:18:48.651840 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ******************
2026-04-05 01:18:48.651846 | orchestrator | Sunday 05 April 2026 01:17:00 +0000 (0:00:02.102) 0:03:26.117 **********
2026-04-05 01:18:48.651852 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-04-05 01:18:48.651862 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-04-05 01:18:48.651868 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-04-05 01:18:48.651875 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-04-05 01:18:48.651881 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-04-05 01:18:48.651887 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-04-05 01:18:48.651894 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-04-05 01:18:48.651900 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-04-05 01:18:48.651906 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-04-05 01:18:48.651919 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-04-05 01:18:48.651925 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-04-05 01:18:48.651931 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-04-05 01:18:48.651937 | orchestrator |
2026-04-05 01:18:48.651943 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************
2026-04-05 01:18:48.651950 | orchestrator | Sunday 05 April 2026 01:17:06 +0000 (0:00:05.351) 0:03:31.469 **********
2026-04-05 01:18:48.651956 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-04-05 01:18:48.651962 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-04-05 01:18:48.651968 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-04-05 01:18:48.651975 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-04-05 01:18:48.651981 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-04-05 01:18:48.651987 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-04-05 01:18:48.651993 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-04-05 01:18:48.651999 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-04-05 01:18:48.652005 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-04-05 01:18:48.652011 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-04-05 01:18:48.652018 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-04-05 01:18:48.652024 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-04-05 01:18:48.652030 | orchestrator |
2026-04-05 01:18:48.652036 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] **********
2026-04-05 01:18:48.652042 | orchestrator | Sunday 05 April 2026 01:17:11 +0000 (0:00:05.385) 0:03:36.854 **********
2026-04-05 01:18:48.652048 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-04-05 01:18:48.652055 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-04-05 01:18:48.652061 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-04-05 01:18:48.652067 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-04-05 01:18:48.652073 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-04-05 01:18:48.652079 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-04-05 01:18:48.652086 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-04-05 01:18:48.652092 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-04-05 01:18:48.652101 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-04-05 01:18:48.652108 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-04-05 01:18:48.652114 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-04-05 01:18:48.652120 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-04-05 01:18:48.652127 | orchestrator |
2026-04-05 01:18:48.652133 | orchestrator | TASK [service-check-containers : octavia | Check containers] *******************
2026-04-05 01:18:48.652139 | orchestrator | Sunday 05 April 2026 01:17:17 +0000 (0:00:05.581) 0:03:42.435 **********
2026-04-05 01:18:48.652146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-05 01:18:48.652160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-05 01:18:48.652167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-05 01:18:48.652174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-05 01:18:48.652183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-05 01:18:48.652190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-05 01:18:48.652197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-05 01:18:48.652212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 01:18:48.652219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 01:18:48.652226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 01:18:48.652232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 01:18:48.652242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 01:18:48.652249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:18:48.652259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:18:48.652269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:18:48.652276 | orchestrator | 2026-04-05 01:18:48.652283 | orchestrator | TASK [service-check-containers : octavia | Notify handlers to restart containers] *** 2026-04-05 01:18:48.652289 | orchestrator | Sunday 05 April 2026 01:17:21 +0000 (0:00:04.437) 0:03:46.873 ********** 2026-04-05 01:18:48.652295 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 01:18:48.652301 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:18:48.652308 | orchestrator | } 2026-04-05 01:18:48.652314 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 01:18:48.652321 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:18:48.652327 | orchestrator | } 2026-04-05 01:18:48.652333 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 01:18:48.652339 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:18:48.652346 | orchestrator | } 2026-04-05 01:18:48.652352 | orchestrator | 2026-04-05 01:18:48.652358 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 01:18:48.652364 | orchestrator | Sunday 05 April 2026 01:17:21 +0000 (0:00:00.557) 0:03:47.431 ********** 2026-04-05 01:18:48.652371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 
'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 01:18:48.652382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 01:18:48.652389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 01:18:48.652400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 01:18:48.652410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:18:48.652417 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:18:48.652423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 01:18:48.652430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 01:18:48.652440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 01:18:48.652450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 01:18:48.652457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:18:48.652463 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:18:48.652474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 01:18:48.652480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 01:18:48.652487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 01:18:48.652497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 01:18:48.652508 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:18:48.652514 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:18:48.652520 | orchestrator |
2026-04-05 01:18:48.652527 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-05 01:18:48.652533 | orchestrator | Sunday 05 April 2026 01:17:22 +0000 (0:00:00.985) 0:03:48.416 **********
2026-04-05 01:18:48.652539 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:18:48.652546 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:18:48.652552 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:18:48.652558 | orchestrator |
2026-04-05 01:18:48.652564 | orchestrator | TASK [octavia : Creating Octavia database] *************************************
2026-04-05 01:18:48.652570 | orchestrator | Sunday 05 April 2026 01:17:23 +0000 (0:00:00.299) 0:03:48.716 **********
2026-04-05 01:18:48.652576 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:18:48.652583 | orchestrator |
2026-04-05 01:18:48.652589 | orchestrator | TASK [octavia : Creating Octavia persistence database] *************************
2026-04-05 01:18:48.652595 | orchestrator | Sunday 05 April 2026 01:17:25 +0000 (0:00:02.312) 0:03:51.029 **********
2026-04-05 01:18:48.652601 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:18:48.652608 | orchestrator |
2026-04-05 01:18:48.652614 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ********
2026-04-05 01:18:48.652620 | orchestrator | Sunday 05 April 2026 01:17:27 +0000 (0:00:02.350) 0:03:53.380 **********
2026-04-05 01:18:48.652626 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:18:48.652633 | orchestrator |
2026-04-05 01:18:48.652639 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] ***
2026-04-05 01:18:48.652648 | orchestrator | Sunday 05 April 2026 01:17:30 +0000 (0:00:02.896) 0:03:56.277 **********
2026-04-05 01:18:48.652655 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:18:48.652661 | orchestrator |
2026-04-05 01:18:48.652668 | orchestrator | TASK [octavia : Running Octavia bootstrap container] ***************************
2026-04-05 01:18:48.652674 | orchestrator | Sunday 05 April 2026 01:17:33 +0000 (0:00:02.438) 0:03:58.715 **********
2026-04-05 01:18:48.652680 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:18:48.652686 | orchestrator |
2026-04-05 01:18:48.652692 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-04-05 01:18:48.652699 | orchestrator | Sunday 05 April 2026 01:17:55 +0000 (0:00:22.337) 0:04:21.052 **********
2026-04-05 01:18:48.652705 | orchestrator |
2026-04-05 01:18:48.652711 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-04-05 01:18:48.652718 | orchestrator | Sunday 05 April 2026 01:17:55 +0000 (0:00:00.072) 0:04:21.125 **********
2026-04-05 01:18:48.652724 | orchestrator |
2026-04-05 01:18:48.652730 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-04-05 01:18:48.652736 | orchestrator | Sunday 05 April 2026 01:17:55 +0000 (0:00:00.079) 0:04:21.204 **********
2026-04-05 01:18:48.652756 | orchestrator |
2026-04-05 01:18:48.652763 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] **********************
2026-04-05 01:18:48.652769 | orchestrator | Sunday 05 April 2026 01:17:55 +0000 (0:00:00.070) 0:04:21.275 **********
2026-04-05 01:18:48.652775 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:18:48.652781 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:18:48.652788 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:18:48.652799 | orchestrator |
2026-04-05 01:18:48.652805 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] *************
2026-04-05 01:18:48.652811 | orchestrator | Sunday 05 April 2026 01:18:12 +0000 (0:00:17.052) 0:04:38.327 **********
2026-04-05 01:18:48.652818 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:18:48.652824 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:18:48.652830 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:18:48.652837 | orchestrator |
2026-04-05 01:18:48.652843 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] ***********
2026-04-05 01:18:48.652849 | orchestrator | Sunday 05 April 2026 01:18:20 +0000 (0:00:07.276) 0:04:45.604 **********
2026-04-05 01:18:48.652855 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:18:48.652862 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:18:48.652868 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:18:48.652875 | orchestrator |
2026-04-05 01:18:48.652881 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] *************
2026-04-05 01:18:48.652887 | orchestrator | Sunday 05 April 2026 01:18:28 +0000 (0:00:08.429) 0:04:54.034 **********
2026-04-05 01:18:48.652893 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:18:48.652900 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:18:48.652906 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:18:48.652912 | orchestrator |
2026-04-05 01:18:48.652919 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] *******************
2026-04-05 01:18:48.652925 | orchestrator | Sunday 05 April 2026 01:18:39 +0000 (0:00:10.584) 0:05:04.618 **********
2026-04-05 01:18:48.652931 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:18:48.652937 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:18:48.652944 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:18:48.652950 | orchestrator |
2026-04-05 01:18:48.652956 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 01:18:48.652963 | orchestrator | testbed-node-0 : ok=58  changed=39  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-04-05 01:18:48.652973 | orchestrator | testbed-node-1 : ok=34  changed=23  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-05 01:18:48.652979 | orchestrator | testbed-node-2 : ok=34  changed=23  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-05 01:18:48.652986 | orchestrator |
2026-04-05 01:18:48.652992 | orchestrator |
2026-04-05 01:18:48.652998 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 01:18:48.653004 | orchestrator | Sunday 05 April 2026 01:18:45 +0000 (0:00:06.048) 0:05:10.667 **********
2026-04-05 01:18:48.653010 | orchestrator | ===============================================================================
2026-04-05 01:18:48.653016 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 22.34s
2026-04-05 01:18:48.653023 | orchestrator | octavia : Adding octavia related roles --------------------------------- 17.67s
2026-04-05 01:18:48.653029 | orchestrator | octavia : Restart octavia-api container -------------------------------- 17.05s
2026-04-05 01:18:48.653035 | orchestrator | octavia : Add rules for security groups -------------------------------- 16.61s
2026-04-05 01:18:48.653041 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.48s
2026-04-05 01:18:48.653047 | orchestrator | octavia : Create security groups for octavia --------------------------- 11.62s
2026-04-05 01:18:48.653054 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.58s
2026-04-05 01:18:48.653060 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 9.13s
2026-04-05 01:18:48.653066 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 8.43s
2026-04-05 01:18:48.653072 | orchestrator | service-ks-register : octavia | Granting/revoking user roles ------------ 8.12s
2026-04-05 01:18:48.653078 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.75s
2026-04-05 01:18:48.653091 | orchestrator | service-ks-register : octavia | Creating/deleting endpoints ------------- 7.60s
2026-04-05 01:18:48.653101 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 7.28s
2026-04-05 01:18:48.653111 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 6.45s
2026-04-05 01:18:48.653129 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 6.12s
2026-04-05 01:18:48.653139 | orchestrator | octavia : Copying over config.json files for services ------------------- 6.08s
2026-04-05 01:18:48.653149 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 6.05s
2026-04-05 01:18:48.653158 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.95s
2026-04-05 01:18:48.653167 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.58s
2026-04-05 01:18:48.653177 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.39s
2026-04-05 01:18:48.653186 | orchestrator | 2026-04-05 01:18:48 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-04-05 01:18:51.696966 | orchestrator | 2026-04-05 01:18:51 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-04-05 01:18:54.738292 | orchestrator | 2026-04-05 01:18:54 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-04-05 01:18:57.786321 | orchestrator | 2026-04-05 01:18:57 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-04-05 01:19:00.833003 | orchestrator | 2026-04-05 01:19:00 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-04-05 01:19:03.884823 | orchestrator | 2026-04-05 01:19:03 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-04-05 01:19:06.932366 | orchestrator | 2026-04-05 01:19:06 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-04-05 01:19:09.980343 | orchestrator | 2026-04-05 01:19:09 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-04-05 01:19:13.042578 | orchestrator | 2026-04-05 01:19:13 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-04-05 01:19:16.089014 | orchestrator | 2026-04-05 01:19:16 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-04-05 01:19:19.132850 | orchestrator | 2026-04-05 01:19:19 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-04-05 01:19:22.177931 | orchestrator | 2026-04-05 01:19:22 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-04-05 01:19:25.227338 | orchestrator | 2026-04-05 01:19:25 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-04-05 01:19:28.274573 | orchestrator | 2026-04-05 01:19:28 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-04-05 01:19:31.323028 | orchestrator | 2026-04-05 01:19:31 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-04-05 01:19:34.359860 | orchestrator | 2026-04-05 01:19:34 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-04-05 01:19:37.406281 | orchestrator | 2026-04-05 01:19:37 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-04-05 01:19:40.454727 | orchestrator | 2026-04-05 01:19:40 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-04-05 01:19:43.502781 | orchestrator | 2026-04-05 01:19:43 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-04-05 01:19:46.552246 | orchestrator | 2026-04-05 01:19:46 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-04-05 01:19:49.603379 | orchestrator |
2026-04-05 01:19:49.811497 | orchestrator |
2026-04-05 01:19:49.816131 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sun Apr 5 01:19:49 UTC 2026
2026-04-05 01:19:49.816172 | orchestrator |
2026-04-05 01:19:50.296120 | orchestrator | ok: Runtime: 0:36:27.339377
2026-04-05 01:19:50.571415 |
2026-04-05 01:19:50.571561 | TASK [Bootstrap services]
2026-04-05 01:19:51.360012 | orchestrator |
2026-04-05 01:19:51.360168 | orchestrator | # BOOTSTRAP
2026-04-05 01:19:51.360186 | orchestrator |
2026-04-05 01:19:51.360195 | orchestrator | + set -e
2026-04-05 01:19:51.360203 | orchestrator | + echo
2026-04-05 01:19:51.360212 | orchestrator | + echo '# BOOTSTRAP'
2026-04-05 01:19:51.360221 | orchestrator | + echo
2026-04-05 01:19:51.360250 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2026-04-05 01:19:51.371203 | orchestrator | + set -e
2026-04-05 01:19:51.371312 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2026-04-05 01:19:57.075156 | orchestrator | 2026-04-05 01:19:57 | INFO  | It takes a moment until task eac6ed8d-ad7a-471d-b6ac-c07a0fe64a2f (flavor-manager) has been started and output is visible here.
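The container definitions echoed by the check-containers task above each carry a healthcheck block whose duration fields (interval, retries, start_period, timeout) arrive as strings of seconds; octavia-driver-agent is the one service without such a block. As an illustration only, not part of the job output: a minimal Python sketch of mapping one of these blocks onto Docker-style HealthConfig values, which count durations in nanoseconds. The dict literal is copied from the octavia_api entry in the log; the helper function and its name are assumptions.

```python
# Sketch (assumption, not OSISM/kolla code): convert a kolla-style
# service healthcheck block, as echoed in the log above, into
# Docker HealthConfig-style kwargs. Docker expects nanoseconds.
service = {
    "container_name": "octavia_api",
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9876"],
        "timeout": "30",
    },
}

NS_PER_S = 1_000_000_000  # nanoseconds per second


def healthcheck_config(svc):
    """Return Docker-style healthcheck kwargs, or {} if none is defined."""
    hc = svc.get("healthcheck")
    if not hc:
        return {}
    return {
        "test": hc["test"],
        "interval": int(hc["interval"]) * NS_PER_S,
        "timeout": int(hc["timeout"]) * NS_PER_S,
        "start_period": int(hc["start_period"]) * NS_PER_S,
        "retries": int(hc["retries"]),
    }


print(healthcheck_config(service)["interval"])  # → 30000000000
```

A definition without a healthcheck key, like the octavia-driver-agent entries above, simply yields an empty dict here.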
2026-04-05 01:20:08.114537 | orchestrator | 2026-04-05 01:20:02 | INFO  | Flavor SCS-1L-1 created
2026-04-05 01:20:08.115336 | orchestrator | 2026-04-05 01:20:03 | INFO  | Flavor SCS-1L-1-5 created
2026-04-05 01:20:08.115354 | orchestrator | 2026-04-05 01:20:03 | INFO  | Flavor SCS-1V-2 created
2026-04-05 01:20:08.115360 | orchestrator | 2026-04-05 01:20:03 | INFO  | Flavor SCS-1V-2-5 created
2026-04-05 01:20:08.115365 | orchestrator | 2026-04-05 01:20:03 | INFO  | Flavor SCS-1V-4 created
2026-04-05 01:20:08.115370 | orchestrator | 2026-04-05 01:20:03 | INFO  | Flavor SCS-1V-4-10 created
2026-04-05 01:20:08.115375 | orchestrator | 2026-04-05 01:20:04 | INFO  | Flavor SCS-1V-8 created
2026-04-05 01:20:08.115380 | orchestrator | 2026-04-05 01:20:04 | INFO  | Flavor SCS-1V-8-20 created
2026-04-05 01:20:08.115391 | orchestrator | 2026-04-05 01:20:04 | INFO  | Flavor SCS-2V-4 created
2026-04-05 01:20:08.115396 | orchestrator | 2026-04-05 01:20:04 | INFO  | Flavor SCS-2V-4-10 created
2026-04-05 01:20:08.115400 | orchestrator | 2026-04-05 01:20:04 | INFO  | Flavor SCS-2V-8 created
2026-04-05 01:20:08.115405 | orchestrator | 2026-04-05 01:20:04 | INFO  | Flavor SCS-2V-8-20 created
2026-04-05 01:20:08.115409 | orchestrator | 2026-04-05 01:20:05 | INFO  | Flavor SCS-2V-16 created
2026-04-05 01:20:08.115413 | orchestrator | 2026-04-05 01:20:05 | INFO  | Flavor SCS-2V-16-50 created
2026-04-05 01:20:08.115418 | orchestrator | 2026-04-05 01:20:05 | INFO  | Flavor SCS-4V-8 created
2026-04-05 01:20:08.115423 | orchestrator | 2026-04-05 01:20:05 | INFO  | Flavor SCS-4V-8-20 created
2026-04-05 01:20:08.115427 | orchestrator | 2026-04-05 01:20:05 | INFO  | Flavor SCS-4V-16 created
2026-04-05 01:20:08.115431 | orchestrator | 2026-04-05 01:20:05 | INFO  | Flavor SCS-4V-16-50 created
2026-04-05 01:20:08.115436 | orchestrator | 2026-04-05 01:20:05 | INFO  | Flavor SCS-4V-32 created
2026-04-05 01:20:08.115441 | orchestrator | 2026-04-05 01:20:06 | INFO  | Flavor SCS-4V-32-100 created
2026-04-05 01:20:08.115445 | orchestrator | 2026-04-05 01:20:06 | INFO  | Flavor SCS-8V-16 created
2026-04-05 01:20:08.115450 | orchestrator | 2026-04-05 01:20:06 | INFO  | Flavor SCS-8V-16-50 created
2026-04-05 01:20:08.115454 | orchestrator | 2026-04-05 01:20:06 | INFO  | Flavor SCS-8V-32 created
2026-04-05 01:20:08.115459 | orchestrator | 2026-04-05 01:20:06 | INFO  | Flavor SCS-8V-32-100 created
2026-04-05 01:20:08.115464 | orchestrator | 2026-04-05 01:20:06 | INFO  | Flavor SCS-16V-32 created
2026-04-05 01:20:08.115468 | orchestrator | 2026-04-05 01:20:07 | INFO  | Flavor SCS-16V-32-100 created
2026-04-05 01:20:08.115473 | orchestrator | 2026-04-05 01:20:07 | INFO  | Flavor SCS-2V-4-20s created
2026-04-05 01:20:08.115477 | orchestrator | 2026-04-05 01:20:07 | INFO  | Flavor SCS-4V-8-50s created
2026-04-05 01:20:08.115482 | orchestrator | 2026-04-05 01:20:07 | INFO  | Flavor SCS-4V-16-100s created
2026-04-05 01:20:08.115486 | orchestrator | 2026-04-05 01:20:07 | INFO  | Flavor SCS-8V-32-100s created
2026-04-05 01:20:09.731477 | orchestrator | 2026-04-05 01:20:09 | INFO  | Trying to run play bootstrap-basic in environment openstack
2026-04-05 01:20:19.853564 | orchestrator | 2026-04-05 01:20:19 | INFO  | Prepare task for execution of bootstrap-basic.
2026-04-05 01:20:19.945979 | orchestrator | 2026-04-05 01:20:19 | INFO  | Task f7f81c0e-9e03-4c7b-bedf-d206c9de2496 (bootstrap-basic) was prepared for execution.
2026-04-05 01:20:19.946117 | orchestrator | 2026-04-05 01:20:19 | INFO  | It takes a moment until task f7f81c0e-9e03-4c7b-bedf-d206c9de2496 (bootstrap-basic) has been started and output is visible here.
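The flavor names created above follow the SCS naming scheme: vCPU count plus CPU class, RAM in GiB, and an optional root disk size in GB with a trailing `s` for SSD-backed disks. A simplified parser covering only the names seen in this log (assumption: just the `L`/`V` CPU classes and these fields; the full SCS flavor-naming spec defines more):

```python
import re

# Simplified pattern for the SCS flavor names in this log:
#   SCS-<vCPUs><cpu class>-<RAM GiB>[-<root disk GB>[s]]
_SCS = re.compile(
    r"^SCS-(?P<cpus>\d+)(?P<cpu_class>[LV])"
    r"-(?P<ram>\d+)"
    r"(?:-(?P<disk>\d+)(?P<disk_type>s?))?$"
)


def parse_scs_flavor(name):
    """Split an SCS flavor name such as 'SCS-2V-4-20s' into its fields."""
    m = _SCS.match(name)
    if not m:
        raise ValueError(f"not an SCS flavor name: {name}")
    d = m.groupdict()
    return {
        "vcpus": int(d["cpus"]),
        "cpu_class": d["cpu_class"],  # V = vCPU, L = low-performance core
        "ram_gib": int(d["ram"]),
        "disk_gb": int(d["disk"]) if d["disk"] else 0,
        "ssd": d["disk_type"] == "s",
    }


assert parse_scs_flavor("SCS-2V-4-20s") == {
    "vcpus": 2, "cpu_class": "V", "ram_gib": 4, "disk_gb": 20, "ssd": True}
```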
2026-04-05 01:21:10.084926 | orchestrator |
2026-04-05 01:21:10.085009 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2026-04-05 01:21:10.085017 | orchestrator |
2026-04-05 01:21:10.085021 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-05 01:21:10.085027 | orchestrator | Sunday 05 April 2026 01:20:23 +0000 (0:00:00.114) 0:00:00.114 **********
2026-04-05 01:21:10.085034 | orchestrator | ok: [localhost]
2026-04-05 01:21:10.085045 | orchestrator |
2026-04-05 01:21:10.085055 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2026-04-05 01:21:10.085062 | orchestrator | Sunday 05 April 2026 01:20:25 +0000 (0:00:02.095) 0:00:02.209 **********
2026-04-05 01:21:10.085071 | orchestrator | ok: [localhost]
2026-04-05 01:21:10.085078 | orchestrator |
2026-04-05 01:21:10.085085 | orchestrator | TASK [Create volume type LUKS] *************************************************
2026-04-05 01:21:10.085092 | orchestrator | Sunday 05 April 2026 01:20:36 +0000 (0:00:11.340) 0:00:13.550 **********
2026-04-05 01:21:10.085099 | orchestrator | changed: [localhost]
2026-04-05 01:21:10.085106 | orchestrator |
2026-04-05 01:21:10.085112 | orchestrator | TASK [Create public network] ***************************************************
2026-04-05 01:21:10.085119 | orchestrator | Sunday 05 April 2026 01:20:44 +0000 (0:00:07.919) 0:00:21.469 **********
2026-04-05 01:21:10.085125 | orchestrator | changed: [localhost]
2026-04-05 01:21:10.085132 | orchestrator |
2026-04-05 01:21:10.085142 | orchestrator | TASK [Set public network to default] *******************************************
2026-04-05 01:21:10.085149 | orchestrator | Sunday 05 April 2026 01:20:50 +0000 (0:00:05.361) 0:00:26.831 **********
2026-04-05 01:21:10.085157 | orchestrator | changed: [localhost]
2026-04-05 01:21:10.085163 | orchestrator |
2026-04-05 01:21:10.085170 | orchestrator | TASK [Create public subnet] ****************************************************
2026-04-05 01:21:10.085178 | orchestrator | Sunday 05 April 2026 01:20:57 +0000 (0:00:06.961) 0:00:33.793 **********
2026-04-05 01:21:10.085185 | orchestrator | changed: [localhost]
2026-04-05 01:21:10.085192 | orchestrator |
2026-04-05 01:21:10.085199 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2026-04-05 01:21:10.085207 | orchestrator | Sunday 05 April 2026 01:21:01 +0000 (0:00:04.714) 0:00:38.507 **********
2026-04-05 01:21:10.085214 | orchestrator | changed: [localhost]
2026-04-05 01:21:10.085222 | orchestrator |
2026-04-05 01:21:10.085230 | orchestrator | TASK [Create manager role] *****************************************************
2026-04-05 01:21:10.085243 | orchestrator | Sunday 05 April 2026 01:21:06 +0000 (0:00:04.148) 0:00:42.655 **********
2026-04-05 01:21:10.085247 | orchestrator | ok: [localhost]
2026-04-05 01:21:10.085251 | orchestrator |
2026-04-05 01:21:10.085256 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 01:21:10.085260 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 01:21:10.085266 | orchestrator |
2026-04-05 01:21:10.085270 | orchestrator |
2026-04-05 01:21:10.085276 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 01:21:10.085282 | orchestrator | Sunday 05 April 2026 01:21:09 +0000 (0:00:03.866) 0:00:46.521 **********
2026-04-05 01:21:10.085289 | orchestrator | ===============================================================================
2026-04-05 01:21:10.085295 | orchestrator | Get volume type LUKS --------------------------------------------------- 11.34s
2026-04-05 01:21:10.085323 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.92s
2026-04-05 01:21:10.085331 | orchestrator | Set public network to default ------------------------------------------- 6.96s
2026-04-05 01:21:10.085337 | orchestrator | Create public network --------------------------------------------------- 5.36s
2026-04-05 01:21:10.085345 | orchestrator | Create public subnet ---------------------------------------------------- 4.71s
2026-04-05 01:21:10.085351 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.15s
2026-04-05 01:21:10.085358 | orchestrator | Create manager role ----------------------------------------------------- 3.87s
2026-04-05 01:21:10.085366 | orchestrator | Gathering Facts --------------------------------------------------------- 2.10s
2026-04-05 01:21:12.170608 | orchestrator | 2026-04-05 01:21:12 | INFO  | It takes a moment until task 1cfa4691-2944-4d84-9c94-6db92b74ad23 (image-manager) has been started and output is visible here.
2026-04-05 01:21:51.497229 | orchestrator | 2026-04-05 01:21:15 | INFO  | Processing image 'Cirros 0.6.2'
2026-04-05 01:21:51.497336 | orchestrator | 2026-04-05 01:21:15 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2026-04-05 01:21:51.497350 | orchestrator | 2026-04-05 01:21:15 | INFO  | Importing image Cirros 0.6.2
2026-04-05 01:21:51.497360 | orchestrator | 2026-04-05 01:21:15 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-04-05 01:21:51.497371 | orchestrator | 2026-04-05 01:21:17 | INFO  | Waiting for image to leave queued state...
2026-04-05 01:21:51.497383 | orchestrator | 2026-04-05 01:21:19 | INFO  | Waiting for import to complete...
2026-04-05 01:21:51.497392 | orchestrator | 2026-04-05 01:21:29 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2026-04-05 01:21:51.497401 | orchestrator | 2026-04-05 01:21:30 | INFO  | Checking parameters of 'Cirros 0.6.2'
2026-04-05 01:21:51.497409 | orchestrator | 2026-04-05 01:21:30 | INFO  | Setting internal_version = 0.6.2
2026-04-05 01:21:51.497418 | orchestrator | 2026-04-05 01:21:30 | INFO  | Setting image_original_user = cirros
2026-04-05 01:21:51.497427 | orchestrator | 2026-04-05 01:21:30 | INFO  | Adding tag os:cirros
2026-04-05 01:21:51.497436 | orchestrator | 2026-04-05 01:21:30 | INFO  | Setting property architecture: x86_64
2026-04-05 01:21:51.497444 | orchestrator | 2026-04-05 01:21:30 | INFO  | Setting property hw_disk_bus: scsi
2026-04-05 01:21:51.497453 | orchestrator | 2026-04-05 01:21:30 | INFO  | Setting property hw_rng_model: virtio
2026-04-05 01:21:51.497462 | orchestrator | 2026-04-05 01:21:30 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-04-05 01:21:51.497470 | orchestrator | 2026-04-05 01:21:31 | INFO  | Setting property hw_watchdog_action: reset
2026-04-05 01:21:51.497479 | orchestrator | 2026-04-05 01:21:31 | INFO  | Setting property hypervisor_type: qemu
2026-04-05 01:21:51.497495 | orchestrator | 2026-04-05 01:21:31 | INFO  | Setting property os_distro: cirros
2026-04-05 01:21:51.497504 | orchestrator | 2026-04-05 01:21:31 | INFO  | Setting property os_purpose: minimal
2026-04-05 01:21:51.497514 | orchestrator | 2026-04-05 01:21:31 | INFO  | Setting property replace_frequency: never
2026-04-05 01:21:51.497523 | orchestrator | 2026-04-05 01:21:32 | INFO  | Setting property uuid_validity: none
2026-04-05 01:21:51.497533 | orchestrator | 2026-04-05 01:21:32 | INFO  | Setting property provided_until: none
2026-04-05 01:21:51.497543 | orchestrator | 2026-04-05 01:21:32 | INFO  | Setting property image_description: Cirros
2026-04-05 01:21:51.497552 | orchestrator | 2026-04-05 01:21:32 | INFO  | Setting property image_name: Cirros
2026-04-05 01:21:51.497584 | orchestrator | 2026-04-05 01:21:32 | INFO  | Setting property internal_version: 0.6.2
2026-04-05 01:21:51.497595 | orchestrator | 2026-04-05 01:21:33 | INFO  | Setting property image_original_user: cirros
2026-04-05 01:21:51.497604 | orchestrator | 2026-04-05 01:21:33 | INFO  | Setting property os_version: 0.6.2
2026-04-05 01:21:51.497614 | orchestrator | 2026-04-05 01:21:33 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-04-05 01:21:51.497623 | orchestrator | 2026-04-05 01:21:33 | INFO  | Setting property image_build_date: 2023-05-30
2026-04-05 01:21:51.497631 | orchestrator | 2026-04-05 01:21:33 | INFO  | Checking status of 'Cirros 0.6.2'
2026-04-05 01:21:51.497639 | orchestrator | 2026-04-05 01:21:33 | INFO  | Checking visibility of 'Cirros 0.6.2'
2026-04-05 01:21:51.497650 | orchestrator | 2026-04-05 01:21:33 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2026-04-05 01:21:51.497658 | orchestrator | 2026-04-05 01:21:34 | INFO  | Processing image 'Cirros 0.6.3'
2026-04-05 01:21:51.497666 | orchestrator | 2026-04-05 01:21:34 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2026-04-05 01:21:51.497675 | orchestrator | 2026-04-05 01:21:34 | INFO  | Importing image Cirros 0.6.3
2026-04-05 01:21:51.497682 | orchestrator | 2026-04-05 01:21:34 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-04-05 01:21:51.497690 | orchestrator | 2026-04-05 01:21:34 | INFO  | Waiting for image to leave queued state...
2026-04-05 01:21:51.497698 | orchestrator | 2026-04-05 01:21:36 | INFO  | Waiting for import to complete...
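The image manager above logs a "Setting property …" line for every property after each import. An idempotent variant would set only the properties that actually differ from the image's current metadata; a minimal sketch of that diff step (illustrative only, not the osism image-manager code):

```python
def plan_property_updates(current, desired):
    """Return the subset of desired properties whose values differ from
    the current image metadata, i.e. what actually needs to be set."""
    return {k: v for k, v in desired.items() if current.get(k) != v}


current = {"os_distro": "cirros", "hw_disk_bus": "ide"}
desired = {"os_distro": "cirros", "hw_disk_bus": "scsi", "hw_rng_model": "virtio"}
assert plan_property_updates(current, desired) == {
    "hw_disk_bus": "scsi", "hw_rng_model": "virtio"}
```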
2026-04-05 01:21:51.497723 | orchestrator | 2026-04-05 01:21:46 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-04-05 01:21:51.497745 | orchestrator | 2026-04-05 01:21:47 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-04-05 01:21:51.497755 | orchestrator | 2026-04-05 01:21:47 | INFO  | Setting internal_version = 0.6.3
2026-04-05 01:21:51.497764 | orchestrator | 2026-04-05 01:21:47 | INFO  | Setting image_original_user = cirros
2026-04-05 01:21:51.497773 | orchestrator | 2026-04-05 01:21:47 | INFO  | Adding tag os:cirros
2026-04-05 01:21:51.497782 | orchestrator | 2026-04-05 01:21:47 | INFO  | Setting property architecture: x86_64
2026-04-05 01:21:51.497791 | orchestrator | 2026-04-05 01:21:47 | INFO  | Setting property hw_disk_bus: scsi
2026-04-05 01:21:51.497800 | orchestrator | 2026-04-05 01:21:47 | INFO  | Setting property hw_rng_model: virtio
2026-04-05 01:21:51.497810 | orchestrator | 2026-04-05 01:21:47 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-04-05 01:21:51.497819 | orchestrator | 2026-04-05 01:21:48 | INFO  | Setting property hw_watchdog_action: reset
2026-04-05 01:21:51.497829 | orchestrator | 2026-04-05 01:21:48 | INFO  | Setting property hypervisor_type: qemu
2026-04-05 01:21:51.497838 | orchestrator | 2026-04-05 01:21:48 | INFO  | Setting property os_distro: cirros
2026-04-05 01:21:51.497846 | orchestrator | 2026-04-05 01:21:48 | INFO  | Setting property os_purpose: minimal
2026-04-05 01:21:51.497924 | orchestrator | 2026-04-05 01:21:48 | INFO  | Setting property replace_frequency: never
2026-04-05 01:21:51.497935 | orchestrator | 2026-04-05 01:21:49 | INFO  | Setting property uuid_validity: none
2026-04-05 01:21:51.497945 | orchestrator | 2026-04-05 01:21:49 | INFO  | Setting property provided_until: none
2026-04-05 01:21:51.497954 | orchestrator | 2026-04-05 01:21:49 | INFO  | Setting property image_description: Cirros
2026-04-05 01:21:51.497970 | orchestrator | 2026-04-05 01:21:49 | INFO  | Setting property image_name: Cirros
2026-04-05 01:21:51.497975 | orchestrator | 2026-04-05 01:21:49 | INFO  | Setting property internal_version: 0.6.3
2026-04-05 01:21:51.497981 | orchestrator | 2026-04-05 01:21:50 | INFO  | Setting property image_original_user: cirros
2026-04-05 01:21:51.497986 | orchestrator | 2026-04-05 01:21:50 | INFO  | Setting property os_version: 0.6.3
2026-04-05 01:21:51.497991 | orchestrator | 2026-04-05 01:21:50 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-04-05 01:21:51.497997 | orchestrator | 2026-04-05 01:21:50 | INFO  | Setting property image_build_date: 2024-09-26
2026-04-05 01:21:51.498002 | orchestrator | 2026-04-05 01:21:50 | INFO  | Checking status of 'Cirros 0.6.3'
2026-04-05 01:21:51.498007 | orchestrator | 2026-04-05 01:21:50 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-04-05 01:21:51.498012 | orchestrator | 2026-04-05 01:21:50 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-04-05 01:21:51.792646 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-04-05 01:21:53.842252 | orchestrator | 2026-04-05 01:21:53 | INFO  | date: 2026-04-04
2026-04-05 01:21:53.842600 | orchestrator | 2026-04-05 01:21:53 | INFO  | image: octavia-amphora-haproxy-2025.1.20260404.qcow2
2026-04-05 01:21:53.842660 | orchestrator | 2026-04-05 01:21:53 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260404.qcow2
2026-04-05 01:21:53.842727 | orchestrator | 2026-04-05 01:21:53 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260404.qcow2.CHECKSUM
2026-04-05 01:21:54.007146 | orchestrator | 2026-04-05 01:21:54 | INFO  | checksum: 576383816079a741c012aa9cc6bbd8c81330623d63a849bbfb1ce63abb0b7544
2026-04-05 01:21:54.106444 | orchestrator | 2026-04-05 01:21:54 | INFO  | It takes a moment until task 83edb3df-5309-4f7b-9b07-30ab3c9173b5 (image-manager) has been started and output is visible here.
2026-04-05 01:23:05.493712 | orchestrator | 2026-04-05 01:21:56 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-04-04'
2026-04-05 01:23:05.493840 | orchestrator | 2026-04-05 01:21:56 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260404.qcow2: 200
2026-04-05 01:23:05.493859 | orchestrator | 2026-04-05 01:21:56 | INFO  | Importing image OpenStack Octavia Amphora 2026-04-04
2026-04-05 01:23:05.493883 | orchestrator | 2026-04-05 01:21:56 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260404.qcow2
2026-04-05 01:23:05.493954 | orchestrator | 2026-04-05 01:21:58 | INFO  | Waiting for image to leave queued state...
2026-04-05 01:23:05.493969 | orchestrator | 2026-04-05 01:22:00 | INFO  | Waiting for import to complete...
2026-04-05 01:23:05.493981 | orchestrator | 2026-04-05 01:22:10 | INFO  | Waiting for import to complete...
2026-04-05 01:23:05.493992 | orchestrator | 2026-04-05 01:22:20 | INFO  | Waiting for import to complete...
2026-04-05 01:23:05.494004 | orchestrator | 2026-04-05 01:22:30 | INFO  | Waiting for import to complete...
2026-04-05 01:23:05.494064 | orchestrator | 2026-04-05 01:22:40 | INFO  | Waiting for import to complete...
2026-04-05 01:23:05.494079 | orchestrator | 2026-04-05 01:22:50 | INFO  | Waiting for import to complete...
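The amphora bootstrap step above logs a SHA-256 checksum taken from the `.CHECKSUM` sidecar URL. A sketch of the kind of verification such a script could perform against the downloaded image (an assumption about the script's logic, which this log does not show):

```python
import hashlib


def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA-256 hex digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def verify(path, expected):
    """Raise ValueError on checksum mismatch, return True on success."""
    actual = sha256_of(path)
    if actual != expected.lower():
        raise ValueError(f"checksum mismatch: {actual} != {expected}")
    return True
```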
2026-04-05 01:23:05.494090 | orchestrator | 2026-04-05 01:23:00 | INFO  | Import of 'OpenStack Octavia Amphora 2026-04-04' successfully completed, reloading images
2026-04-05 01:23:05.494130 | orchestrator | 2026-04-05 01:23:01 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-04-04'
2026-04-05 01:23:05.494142 | orchestrator | 2026-04-05 01:23:01 | INFO  | Setting internal_version = 2026-04-04
2026-04-05 01:23:05.494153 | orchestrator | 2026-04-05 01:23:01 | INFO  | Setting image_original_user = ubuntu
2026-04-05 01:23:05.494165 | orchestrator | 2026-04-05 01:23:01 | INFO  | Adding tag amphora
2026-04-05 01:23:05.494176 | orchestrator | 2026-04-05 01:23:01 | INFO  | Adding tag os:ubuntu
2026-04-05 01:23:05.494187 | orchestrator | 2026-04-05 01:23:01 | INFO  | Setting property architecture: x86_64
2026-04-05 01:23:05.494198 | orchestrator | 2026-04-05 01:23:01 | INFO  | Setting property hw_disk_bus: scsi
2026-04-05 01:23:05.494209 | orchestrator | 2026-04-05 01:23:01 | INFO  | Setting property hw_rng_model: virtio
2026-04-05 01:23:05.494220 | orchestrator | 2026-04-05 01:23:02 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-04-05 01:23:05.494231 | orchestrator | 2026-04-05 01:23:02 | INFO  | Setting property hw_watchdog_action: reset
2026-04-05 01:23:05.494242 | orchestrator | 2026-04-05 01:23:02 | INFO  | Setting property hypervisor_type: qemu
2026-04-05 01:23:05.494256 | orchestrator | 2026-04-05 01:23:02 | INFO  | Setting property os_distro: ubuntu
2026-04-05 01:23:05.494269 | orchestrator | 2026-04-05 01:23:02 | INFO  | Setting property replace_frequency: quarterly
2026-04-05 01:23:05.494281 | orchestrator | 2026-04-05 01:23:03 | INFO  | Setting property uuid_validity: last-1
2026-04-05 01:23:05.494295 | orchestrator | 2026-04-05 01:23:03 | INFO  | Setting property provided_until: none
2026-04-05 01:23:05.494313 | orchestrator | 2026-04-05 01:23:03 | INFO  | Setting property os_purpose: network
2026-04-05 01:23:05.494333 | orchestrator | 2026-04-05 01:23:03 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2026-04-05 01:23:05.494372 | orchestrator | 2026-04-05 01:23:03 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2026-04-05 01:23:05.494394 | orchestrator | 2026-04-05 01:23:04 | INFO  | Setting property internal_version: 2026-04-04
2026-04-05 01:23:05.494414 | orchestrator | 2026-04-05 01:23:04 | INFO  | Setting property image_original_user: ubuntu
2026-04-05 01:23:05.494435 | orchestrator | 2026-04-05 01:23:04 | INFO  | Setting property os_version: 2026-04-04
2026-04-05 01:23:05.494455 | orchestrator | 2026-04-05 01:23:04 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260404.qcow2
2026-04-05 01:23:05.494475 | orchestrator | 2026-04-05 01:23:05 | INFO  | Setting property image_build_date: 2026-04-04
2026-04-05 01:23:05.494495 | orchestrator | 2026-04-05 01:23:05 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-04-04'
2026-04-05 01:23:05.494515 | orchestrator | 2026-04-05 01:23:05 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-04-04'
2026-04-05 01:23:05.494562 | orchestrator | 2026-04-05 01:23:05 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2026-04-05 01:23:05.494576 | orchestrator | 2026-04-05 01:23:05 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2026-04-05 01:23:05.494588 | orchestrator | 2026-04-05 01:23:05 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2026-04-05 01:23:05.494599 | orchestrator | 2026-04-05 01:23:05 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2026-04-05 01:23:05.837446 | orchestrator | ok: Runtime: 0:03:14.850803
2026-04-05 01:23:05.855659 |
2026-04-05 01:23:05.855808 | TASK [Run checks]
2026-04-05 01:23:06.557426 | orchestrator | + set -e
2026-04-05 01:23:06.557672 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-05 01:23:06.557708 | orchestrator | ++ export INTERACTIVE=false
2026-04-05 01:23:06.557781 | orchestrator | ++ INTERACTIVE=false
2026-04-05 01:23:06.557805 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-05 01:23:06.557827 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-05 01:23:06.557850 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-05 01:23:06.559124 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-05 01:23:06.565795 | orchestrator |
2026-04-05 01:23:06.565876 | orchestrator | # CHECK
2026-04-05 01:23:06.565887 | orchestrator |
2026-04-05 01:23:06.565933 | orchestrator | ++ export MANAGER_VERSION=latest
2026-04-05 01:23:06.565947 | orchestrator | ++ MANAGER_VERSION=latest
2026-04-05 01:23:06.565957 | orchestrator | + echo
2026-04-05 01:23:06.565965 | orchestrator | + echo '# CHECK'
2026-04-05 01:23:06.565973 | orchestrator | + echo
2026-04-05 01:23:06.565986 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-05 01:23:06.566629 | orchestrator | ++ semver latest 5.0.0
2026-04-05 01:23:06.635459 | orchestrator |
2026-04-05 01:23:06.635580 | orchestrator | ## Containers @ testbed-manager
2026-04-05 01:23:06.635605 | orchestrator |
2026-04-05 01:23:06.635641 | orchestrator | + [[ -1 -eq -1 ]]
2026-04-05 01:23:06.635662 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-04-05 01:23:06.635681 | orchestrator | + echo
2026-04-05 01:23:06.635703 | orchestrator | + echo '## Containers @ testbed-manager'
2026-04-05 01:23:06.635724 | orchestrator | + echo
2026-04-05 01:23:06.635744 | orchestrator | + osism container testbed-manager ps
2026-04-05 01:23:07.710993 | orchestrator | 2026-04-05 01:23:07 | INFO  | Creating empty known_hosts file: /share/known_hosts
2026-04-05 01:23:08.126191 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
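In the trace above, `semver latest 5.0.0` evaluates to -1 and the script then branches on the literal string `latest` rather than a numeric comparison. A sketch of a comparison helper with that behavior (an assumption about the `semver` helper, inferred only from the -1 seen in this log):

```python
def compare_versions(a, b):
    """Return -1, 0, or 1 comparing dotted numeric versions.

    A non-numeric tag such as 'latest' compares as -1 here, mirroring the
    `semver latest 5.0.0` result in the log; the check script handles the
    'latest' case separately via a string comparison.
    """
    def key(v):
        return [int(part) for part in v.split(".")]

    try:
        ka, kb = key(a), key(b)
    except ValueError:
        return -1  # non-numeric tag, e.g. 'latest'
    return (ka > kb) - (ka < kb)


assert compare_versions("latest", "5.0.0") == -1
assert compare_versions("5.1.0", "5.0.0") == 1
```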
2026-04-05 01:23:08.126313 | orchestrator | ec4489695c8a registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_blackbox_exporter
2026-04-05 01:23:08.126338 | orchestrator | 3271b9f53aa5 registry.osism.tech/kolla/prometheus-alertmanager:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_alertmanager
2026-04-05 01:23:08.126350 | orchestrator | ea81f532c938 registry.osism.tech/kolla/prometheus-cadvisor:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_cadvisor
2026-04-05 01:23:08.126369 | orchestrator | 53fae581b12d registry.osism.tech/kolla/prometheus-node-exporter:2025.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_node_exporter
2026-04-05 01:23:08.126386 | orchestrator | 228e1ce40ab0 registry.osism.tech/kolla/prometheus-server:2025.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_server
2026-04-05 01:23:08.126399 | orchestrator | 73d48912242d registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 19 minutes ago Up 19 minutes cephclient
2026-04-05 01:23:08.126410 | orchestrator | 3ad7ac02edc4 registry.osism.tech/kolla/cron:2025.1 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron
2026-04-05 01:23:08.126422 | orchestrator | afa902b8b1ae registry.osism.tech/kolla/kolla-toolbox:2025.1 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox
2026-04-05 01:23:08.126458 | orchestrator | ba3c776f84b1 registry.osism.tech/kolla/fluentd:2025.1 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd
2026-04-05 01:23:08.126470 | orchestrator | 4a6bedf95453 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 33 minutes ago Up 32 minutes (healthy) 80/tcp phpmyadmin
2026-04-05 01:23:08.126481 | orchestrator | 2036f1c329f0 registry.osism.tech/osism/openstackclient:2025.1 "/usr/bin/dumb-init …" 33 minutes ago Up 33 minutes openstackclient
2026-04-05 01:23:08.126492 | orchestrator | cc42eea7e172 registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 33 minutes ago Up 33 minutes (healthy) 8080/tcp homer
2026-04-05 01:23:08.126504 | orchestrator | 4e215f4b1cd8 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 57 minutes ago Up 56 minutes (healthy) 192.168.16.5:3128->3128/tcp squid
2026-04-05 01:23:08.126515 | orchestrator | cd86cb612f6e registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" About an hour ago Up 40 minutes (healthy) manager-inventory_reconciler-1
2026-04-05 01:23:08.126527 | orchestrator | 49e480c5c5f2 registry.osism.tech/osism/kolla-ansible:2025.1 "/entrypoint.sh osis…" About an hour ago Up 40 minutes (healthy) kolla-ansible
2026-04-05 01:23:08.126565 | orchestrator | 470fcbffde4e registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" About an hour ago Up 40 minutes (healthy) osism-kubernetes
2026-04-05 01:23:08.126577 | orchestrator | d707e7fd0dea registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" About an hour ago Up 40 minutes (healthy) ceph-ansible
2026-04-05 01:23:08.126589 | orchestrator | bbf2d2850375 registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" About an hour ago Up 40 minutes (healthy) osism-ansible
2026-04-05 01:23:08.126614 | orchestrator | e9967fab6838 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" About an hour ago Up 41 minutes (healthy) 8000/tcp manager-ara-server-1
2026-04-05 01:23:08.126626 | orchestrator | 756aa25e0859 registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" About an hour ago Up 41 minutes 192.168.16.5:3000->3000/tcp osism-frontend
2026-04-05 01:23:08.126638 | orchestrator | 6d63724dbe42 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 41 minutes (healthy) manager-beat-1
2026-04-05 01:23:08.126649 | orchestrator | 23afef509044 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 41 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2026-04-05 01:23:08.126660 | orchestrator | 94c6c4ea4fb5 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" About an hour ago Up 41 minutes (healthy) 6379/tcp manager-redis-1
2026-04-05 01:23:08.126680 | orchestrator | e4bc42fcaa5e registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 41 minutes (healthy) manager-listener-1
2026-04-05 01:23:08.126691 | orchestrator | e189fe53c6a6 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" About an hour ago Up 41 minutes (healthy) 3306/tcp manager-mariadb-1
2026-04-05 01:23:08.126708 | orchestrator | 967128c3b084 registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" About an hour ago Up 41 minutes (healthy) osismclient
2026-04-05 01:23:08.126727 | orchestrator | 934b35edda19 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 41 minutes (healthy) manager-openstack-1
2026-04-05 01:23:08.126747 | orchestrator | 352dbc096036 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 41 minutes (healthy) manager-flower-1
2026-04-05 01:23:08.126765 | orchestrator | b7e2d86037eb registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" About an hour ago Up About an hour (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2026-04-05 01:23:08.277685 | orchestrator |
2026-04-05 01:23:08.277794 | orchestrator | ## Images @ testbed-manager
2026-04-05 01:23:08.277810 | orchestrator |
2026-04-05 01:23:08.277822 | orchestrator | + echo
2026-04-05 01:23:08.277834 | orchestrator | + echo '## Images @ testbed-manager'
2026-04-05 01:23:08.277846 | orchestrator | + echo
2026-04-05 01:23:08.277861 | orchestrator | + osism container testbed-manager images
2026-04-05 01:23:09.738958 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-05 01:23:09.739072 | orchestrator | registry.osism.tech/osism/osism-ansible latest a367d0d74b25 About an hour ago 638MB
2026-04-05 01:23:09.739090 | orchestrator | registry.osism.tech/osism/kolla-ansible 2025.1 91213f792061 About an hour ago 635MB
2026-04-05 01:23:09.739101 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest 9f041662cb1e About an hour ago 1.24GB
2026-04-05 01:23:09.739111 | orchestrator | registry.osism.tech/osism/osism latest 39ad30e360ac About an hour ago 407MB
2026-04-05 01:23:09.739122 | orchestrator | registry.osism.tech/osism/ceph-ansible reef ae335b1618c4 About an hour ago 585MB
2026-04-05 01:23:09.739132 | orchestrator | registry.osism.tech/osism/osism-frontend latest b82497637790 About an hour ago 212MB
2026-04-05 01:23:09.739142 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest 64857c311a01 About an hour ago 357MB
2026-04-05 01:23:09.739152 | orchestrator | registry.osism.tech/osism/openstackclient 2025.1 49e62433a808 22 hours ago 213MB
2026-04-05 01:23:09.739163 | orchestrator | registry.osism.tech/osism/cephclient reef 0ce6a066fac3 22 hours ago 453MB
2026-04-05 01:23:09.739174 | orchestrator | registry.osism.tech/kolla/cron 2025.1 53831d2a110c 5 days ago 277MB
2026-04-05 01:23:09.739185 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2025.1 c170972da654 5 days ago 683MB
2026-04-05 01:23:09.739196 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2025.1 f3b5dcd199ab 5 days ago 319MB
2026-04-05 01:23:09.739206 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2025.1 98c589004138 5 days ago 317MB
2026-04-05 01:23:09.739240 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2025.1 f429f961b947 5 days ago 415MB
2026-04-05 01:23:09.739251 | orchestrator | registry.osism.tech/kolla/prometheus-server 2025.1 1ac263a9ab9a 5 days ago 860MB
2026-04-05 01:23:09.739260 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2025.1 77360379dc5a 5 days ago 368MB
2026-04-05 01:23:09.739269 | orchestrator | registry.osism.tech/kolla/fluentd 2025.1 16094ab8b9a7 7 days ago 590MB
2026-04-05 01:23:09.739279 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 2 months ago 41.4MB
2026-04-05 01:23:09.739289 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 4 months ago 11.5MB
2026-04-05 01:23:09.739299 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 4 months ago 334MB
2026-04-05 01:23:09.739308 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 5 months ago 742MB
2026-04-05 01:23:09.739318 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 7 months ago 275MB
2026-04-05 01:23:09.739328 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 8 months ago 226MB
2026-04-05 01:23:09.739337 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 22 months ago 146MB
2026-04-05 01:23:09.896726 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-05 01:23:09.896828 | orchestrator | ++ semver latest 5.0.0
2026-04-05 01:23:09.954629 | orchestrator |
2026-04-05 01:23:09.954723 | orchestrator | ## Containers @ testbed-node-0
2026-04-05 01:23:09.954737 | orchestrator |
2026-04-05 01:23:09.954746 | orchestrator | + [[ -1 -eq -1 ]]
2026-04-05 01:23:09.954755 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-04-05 01:23:09.954765 | orchestrator | + echo
2026-04-05 01:23:09.954774 | orchestrator | + echo '## Containers @ testbed-node-0'
2026-04-05 01:23:09.954784 | orchestrator | + echo
2026-04-05 01:23:09.954793 | orchestrator | + osism container testbed-node-0 ps
2026-04-05 01:23:11.545737 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-05 01:23:11.545856 | orchestrator | 46dea014d6f2 registry.osism.tech/kolla/octavia-worker:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2026-04-05 01:23:11.545875 | orchestrator | 1cd5942d01f9 registry.osism.tech/kolla/octavia-housekeeping:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2026-04-05 01:23:11.545888 | orchestrator | 95af7e09cc5a registry.osism.tech/kolla/octavia-health-manager:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2026-04-05 01:23:11.545948 | orchestrator | 51c1f7d210f3 registry.osism.tech/kolla/octavia-driver-agent:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2026-04-05 01:23:11.545961 | orchestrator | 0deded1151cf registry.osism.tech/kolla/octavia-api:2025.1 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api
2026-04-05 01:23:11.545972 | orchestrator | 09b3fdf2f9d1 registry.osism.tech/kolla/nova-novncproxy:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_novncproxy
2026-04-05 01:23:11.545984 | orchestrator | 995e04092fb5 registry.osism.tech/kolla/magnum-conductor:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor
2026-04-05 01:23:11.545996 | orchestrator | 102d0b85c9ec registry.osism.tech/kolla/nova-conductor:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_conductor
2026-04-05 01:23:11.546086 | orchestrator | daa527286907 registry.osism.tech/kolla/magnum-api:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_api
2026-04-05 01:23:11.546109 | orchestrator | 5bc9ae21421a registry.osism.tech/kolla/placement-api:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2026-04-05 01:23:11.546318 | orchestrator | 2383195ac339 registry.osism.tech/kolla/grafana:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes grafana
2026-04-05 01:23:11.546343 | orchestrator | 074cd512a29b registry.osism.tech/kolla/designate-worker:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_worker
2026-04-05 01:23:11.546363 | orchestrator | 53b49916ac5a registry.osism.tech/kolla/designate-mdns:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_mdns
2026-04-05 01:23:11.546381 | orchestrator | d40d0b177755 registry.osism.tech/kolla/designate-producer:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_producer
2026-04-05 01:23:11.546399 | orchestrator | 3359b600c86b registry.osism.tech/kolla/designate-central:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_central
2026-04-05 01:23:11.546499 | orchestrator | 4d49602c2d35 registry.osism.tech/kolla/nova-api:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_metadata
2026-04-05 01:23:11.546520 | orchestrator | c7a553f3a7f5 registry.osism.tech/kolla/designate-api:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_api
2026-04-05 01:23:11.546541 | orchestrator | 104ff17c666f registry.osism.tech/kolla/nova-api:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api
2026-04-05 01:23:11.546559 | orchestrator | 94c2ba29caa7 registry.osism.tech/kolla/designate-backend-bind9:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_backend_bind9
2026-04-05 01:23:11.546576 | orchestrator | 65d582f6f43b registry.osism.tech/kolla/nova-scheduler:2025.1 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_scheduler
2026-04-05 01:23:11.546588 | orchestrator | 658015efc2e8 registry.osism.tech/kolla/barbican-worker:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_worker
2026-04-05 01:23:11.546599 | orchestrator | 1aa514393dda registry.osism.tech/kolla/neutron-server:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) neutron_server
2026-04-05 01:23:11.546620 | orchestrator | b84b32e6f31d registry.osism.tech/kolla/barbican-keystone-listener:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_keystone_listener
2026-04-05 01:23:11.546632 | orchestrator | 065ca8dc8422 registry.osism.tech/kolla/barbican-api:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api
2026-04-05 01:23:11.546642 | orchestrator | 171957e034d9 registry.osism.tech/kolla/cinder-backup:2025.1 "dumb-init --single-…" 14 minutes ago Up 13 minutes (healthy) cinder_backup
2026-04-05 01:23:11.546671 | orchestrator | c71271fa3b09 registry.osism.tech/kolla/cinder-volume:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_volume
2026-04-05 01:23:11.546683 | orchestrator | 0b07a23735ca registry.osism.tech/kolla/cinder-scheduler:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler
2026-04-05 01:23:11.546694 | orchestrator | ca053ffee114 registry.osism.tech/kolla/glance-api:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api
2026-04-05 01:23:11.546719 | orchestrator | c30526c0b032 registry.osism.tech/kolla/cinder-api:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_api
2026-04-05 01:23:11.546730 | orchestrator | 3b4b02508480 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_elasticsearch_exporter
2026-04-05 01:23:11.546742 | orchestrator | cd820aea857c registry.osism.tech/kolla/prometheus-cadvisor:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_cadvisor
2026-04-05 01:23:11.546753 | orchestrator | c6b7b74087f4 registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_memcached_exporter
2026-04-05 01:23:11.546764 | orchestrator | d1564e086874 registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_mysqld_exporter
2026-04-05 01:23:11.546777 | orchestrator | 6df91e85229d registry.osism.tech/kolla/prometheus-node-exporter:2025.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_node_exporter
2026-04-05 01:23:11.546815 | orchestrator | ae67ff4494a6 registry.osism.tech/kolla/keystone:2025.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone
2026-04-05 01:23:11.546848 | orchestrator | 8a45ccb74a53 registry.osism.tech/kolla/keystone-fernet:2025.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet
2026-04-05 01:23:11.546868 | orchestrator | df8f30ead823 registry.osism.tech/kolla/keystone-ssh:2025.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh
2026-04-05 01:23:11.546886 | orchestrator | a020670f1b6a registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 18 minutes ago Up 18 minutes ceph-mgr-testbed-node-0
2026-04-05 01:23:11.546964 | orchestrator | de6cfa7a5465 registry.osism.tech/kolla/horizon:2025.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon
2026-04-05 01:23:11.546983 | orchestrator | 2c922f7c5173 registry.osism.tech/kolla/mariadb-server:2025.1 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb
2026-04-05 01:23:11.547001 | orchestrator | 79d6413b2857 registry.osism.tech/kolla/opensearch-dashboards:2025.1 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards
2026-04-05 01:23:11.547020 | orchestrator | 52ed7f530fa0 registry.osism.tech/kolla/opensearch:2025.1 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch
2026-04-05 01:23:11.547038 | orchestrator | 238cc05bbd40 registry.osism.tech/kolla/keepalived:2025.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived
2026-04-05 01:23:11.547056 | orchestrator | 78d48d18fd30 registry.osism.tech/kolla/proxysql:2025.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql
2026-04-05 01:23:11.547074 | orchestrator | a0c6e1c6395e registry.osism.tech/kolla/haproxy:2025.1 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy
2026-04-05 01:23:11.547091 | orchestrator | fe203a9b11cb registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-0
2026-04-05 01:23:11.547127 | orchestrator | 80dd9babd0f3 registry.osism.tech/kolla/ovn-northd:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd
2026-04-05 01:23:11.547158 | orchestrator | b06ac54a1fe6 registry.osism.tech/kolla/ovn-sb-db-relay:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db_relay_1
2026-04-05 01:23:11.547198 | orchestrator | 805b9c1ab9ff registry.osism.tech/kolla/ovn-sb-db-server:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db
2026-04-05 01:23:11.547216 | orchestrator | e23fd02de1d9 registry.osism.tech/kolla/ovn-nb-db-server:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_nb_db
2026-04-05 01:23:11.547233 | orchestrator | 87bcd926dc00 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-0
2026-04-05 01:23:11.547249 | orchestrator | eb4cfac627d3 registry.osism.tech/kolla/ovn-controller:2025.1 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_controller
2026-04-05 01:23:11.547264 | orchestrator | 780e4c0c8e48 registry.osism.tech/kolla/rabbitmq:2025.1 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) rabbitmq
2026-04-05 01:23:11.547280 | orchestrator | 98d98dbd9134 registry.osism.tech/kolla/openvswitch-vswitchd:2025.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_vswitchd
2026-04-05 01:23:11.547297 | orchestrator | 42666e809ba0 registry.osism.tech/kolla/openvswitch-db-server:2025.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db
2026-04-05 01:23:11.547314 | orchestrator | 0c1225eb3f18 registry.osism.tech/kolla/redis-sentinel:2025.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel
2026-04-05 01:23:11.547334 | orchestrator | 7fd7bdf37515 registry.osism.tech/kolla/redis:2025.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis
2026-04-05 01:23:11.547349 | orchestrator | 36a4ccf81de4 registry.osism.tech/kolla/memcached:2025.1 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) memcached
2026-04-05 01:23:11.547365 | orchestrator | ea765d33b535 registry.osism.tech/kolla/cron:2025.1 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron
2026-04-05 01:23:11.547383 | orchestrator | 545bf7caffa8 registry.osism.tech/kolla/kolla-toolbox:2025.1 "dumb-init --single-…" 32 minutes ago Up 32 minutes kolla_toolbox
2026-04-05 01:23:11.547401 | orchestrator | 8b34e588ec50 registry.osism.tech/kolla/fluentd:2025.1 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd
2026-04-05 01:23:11.701014 | orchestrator |
2026-04-05 01:23:11.701139 | orchestrator | ## Images @ testbed-node-0
2026-04-05 01:23:11.701157 | orchestrator |
2026-04-05 01:23:11.701169 | orchestrator | + echo
2026-04-05 01:23:11.701203 | orchestrator | + echo '## Images @ testbed-node-0'
2026-04-05 01:23:11.701216 | orchestrator | + echo
2026-04-05 01:23:11.701227 | orchestrator | + osism container testbed-node-0 images
2026-04-05 01:23:13.203409 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-05 01:23:13.203549 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 9c619feb45c6 22 hours ago 1.35GB
2026-04-05 01:23:13.203573 | orchestrator | registry.osism.tech/kolla/cron 2025.1 53831d2a110c 5 days ago 277MB
2026-04-05 01:23:13.203592 | orchestrator | registry.osism.tech/kolla/proxysql 2025.1 95a248a255b0 5 days ago 427MB
2026-04-05 01:23:13.203611 | orchestrator | registry.osism.tech/kolla/keepalived 2025.1 055a08d1d646 5 days ago 288MB
2026-04-05 01:23:13.203630 | orchestrator | registry.osism.tech/kolla/memcached 2025.1 cac05c20af97 5 days ago 277MB
2026-04-05 01:23:13.203648 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2025.1 c170972da654 5 days ago 683MB
2026-04-05 01:23:13.203706 | orchestrator | registry.osism.tech/kolla/rabbitmq 2025.1 9c6c79e2e193 5 days ago 350MB
2026-04-05 01:23:13.203728 | orchestrator | registry.osism.tech/kolla/haproxy 2025.1 4951106b8b70 5 days ago 285MB
2026-04-05 01:23:13.203747 | orchestrator | registry.osism.tech/kolla/redis 2025.1 f2f3f0f280de 5 days ago 284MB
2026-04-05 01:23:13.203767 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2025.1 bac3fcf27cf1 5 days ago 284MB
2026-04-05 01:23:13.203787 | orchestrator | registry.osism.tech/kolla/mariadb-server 2025.1 829501547cd8 5 days ago 463MB
2026-04-05 01:23:13.203804 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2025.1 9f5afac77e5c 5 days ago 293MB
2026-04-05 01:23:13.203823 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2025.1 aef84137d109 5 days ago 293MB
2026-04-05 01:23:13.203836 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2025.1 8c8c7462421e 5 days ago 309MB
2026-04-05 01:23:13.203847 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2025.1 98c589004138 5 days ago 317MB
2026-04-05 01:23:13.203857 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2025.1 fea6a6b33ce4 5 days ago 312MB
2026-04-05 01:23:13.203868 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2025.1 5ef617a21b54 5 days ago 303MB
2026-04-05 01:23:13.203879 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2025.1 77360379dc5a 5 days ago 368MB
2026-04-05 01:23:13.203889 | orchestrator | registry.osism.tech/kolla/horizon 2025.1 43951a9692de 5 days ago 1.2GB
2026-04-05 01:23:13.203939 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2025.1 7a60872df8bd 5 days ago 301MB
2026-04-05 01:23:13.203951 | orchestrator | registry.osism.tech/kolla/ovn-northd 2025.1 b22d6b5967f6 5 days ago 301MB
2026-04-05 01:23:13.203962 | orchestrator | registry.osism.tech/kolla/ovn-controller 2025.1 b0c226bf7131 5 days ago 301MB
2026-04-05 01:23:13.203973 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2025.1 be69c3ad4ebc 5 days ago 301MB
2026-04-05 01:23:13.203984 | orchestrator | registry.osism.tech/kolla/aodh-listener 2025.1 094e864fa4b6 5 days ago 995MB
2026-04-05 01:23:13.203995 | orchestrator | registry.osism.tech/kolla/aodh-api 2025.1 d5ddbea139ad 5 days ago 994MB
2026-04-05 01:23:13.204009 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2025.1 419c6f4acdd0 5 days ago 995MB
2026-04-05 01:23:13.204020 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2025.1 b130f227014d 5 days ago 995MB
2026-04-05 01:23:13.204031 | orchestrator | registry.osism.tech/kolla/placement-api 2025.1 960aa6a4a8de 5 days ago 996MB
2026-04-05 01:23:13.204042 | orchestrator | registry.osism.tech/kolla/glance-api 2025.1 369b7ddbf017 5 days ago 1.12GB
2026-04-05 01:23:13.204053 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2025.1 d8e83229f11e 5 days ago 1.23GB
2026-04-05 01:23:13.204063 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2025.1 3fe2b0b8cfee 5 days ago 1.39GB
2026-04-05 01:23:13.204074 | orchestrator | registry.osism.tech/kolla/nova-api 2025.1 158e57839a6b 5 days ago 1.23GB
2026-04-05 01:23:13.204085 | orchestrator | registry.osism.tech/kolla/nova-conductor 2025.1 5e0b67322fbf 5 days ago 1.23GB
2026-04-05 01:23:13.204096 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2025.1 e83ea289f589 5 days ago 1.05GB
2026-04-05 01:23:13.204107 | orchestrator | registry.osism.tech/kolla/octavia-worker 2025.1 51936f6dc571 5 days ago 1.05GB
2026-04-05 01:23:13.204118 | orchestrator | registry.osism.tech/kolla/octavia-api 2025.1 a8a7290762c3 5 days ago 1.07GB
2026-04-05 01:23:13.204159 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2025.1 07f1afcad488 5 days ago 1.05GB
2026-04-05 01:23:13.204171 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2025.1 0d3b38b2c976 5 days ago 1.07GB
2026-04-05 01:23:13.204182 | orchestrator | registry.osism.tech/kolla/cinder-api 2025.1 afd6512aefbf 5 days ago 1.43GB
2026-04-05 01:23:13.204193 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2025.1 cb4b4f730395 5 days ago 1.43GB
2026-04-05 01:23:13.204204 | orchestrator | registry.osism.tech/kolla/cinder-volume 2025.1 db4179e1711f 5 days ago 1.79GB
2026-04-05 01:23:13.204215 | orchestrator | registry.osism.tech/kolla/cinder-backup 2025.1 d0319351afef 5 days ago 1.44GB
2026-04-05 01:23:13.204226 | orchestrator | registry.osism.tech/kolla/skyline-console 2025.1 778a5c2a7676 5 days ago 1.07GB
2026-04-05 01:23:13.204237 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2025.1 06549fefcbea 5 days ago 1.02GB
2026-04-05 01:23:13.204248 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2025.1 8dad295e99c2 5 days ago 997MB
2026-04-05 01:23:13.204258 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2025.1 e6e7fe48c025 5 days ago 996MB
2026-04-05 01:23:13.204269 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2025.1 1c66eb60d90d 5 days ago 1.06GB
2026-04-05 01:23:13.204280 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2025.1 f3aeb32a6011 5 days ago 1.05GB
2026-04-05 01:23:13.204291 | orchestrator | registry.osism.tech/kolla/keystone 2025.1 d2329ba4a45d 5 days ago 1.09GB
2026-04-05 01:23:13.204302 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2025.1 d7fa3d0ffbc8 5 days ago 1.27GB
2026-04-05 01:23:13.204313 | orchestrator | registry.osism.tech/kolla/magnum-api 2025.1 76d813dd9361 5 days ago 1.15GB
2026-04-05 01:23:13.204324 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2025.1 fffdc676c6f3 5 days ago 1.01GB
2026-04-05 01:23:13.204335 | orchestrator | registry.osism.tech/kolla/designate-producer 2025.1 f1774da93f29 5 days ago 1GB
2026-04-05 01:23:13.204346 | orchestrator | registry.osism.tech/kolla/designate-central 2025.1 313130236671 5 days ago 1GB
2026-04-05 01:23:13.204367 | orchestrator | registry.osism.tech/kolla/designate-api 2025.1 e4df27d536ad 5 days ago 1GB
2026-04-05 01:23:13.204405 | orchestrator | registry.osism.tech/kolla/designate-worker 2025.1 da5b5dd8f0f8 5 days ago 1.01GB
2026-04-05 01:23:13.204424 | orchestrator | registry.osism.tech/kolla/designate-mdns 2025.1 981f1c0984fd 5 days ago 1GB
2026-04-05 01:23:13.204441 | orchestrator | registry.osism.tech/kolla/neutron-server 2025.1 77837382f9b4 5 days ago 1.24GB
2026-04-05 01:23:13.204461 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2025.1 2f7505ba4454 5 days ago 1GB
2026-04-05 01:23:13.204479 | orchestrator | registry.osism.tech/kolla/barbican-worker 2025.1 68100f4cfa52 5 days ago 1GB
2026-04-05 01:23:13.204500 | orchestrator | registry.osism.tech/kolla/barbican-api 2025.1 6e6f7bfebcca 5 days ago 1GB
2026-04-05 01:23:13.204516 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-relay 2025.1 33ee60a7efe8 5 days ago 301MB
2026-04-05 01:23:13.204527 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2025.1 d1fff501712d 7 days ago 1.54GB
2026-04-05 01:23:13.204537 | orchestrator | registry.osism.tech/kolla/opensearch 2025.1 26c9d21ae7a0 7 days ago 1.57GB
2026-04-05 01:23:13.204548 | orchestrator | registry.osism.tech/kolla/fluentd 2025.1 16094ab8b9a7 7 days ago 590MB
2026-04-05 01:23:13.204568 | orchestrator | registry.osism.tech/kolla/grafana 2025.1 797914887ee8 7 days ago 1.04GB
2026-04-05 01:23:13.351816 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-05 01:23:13.351946 | orchestrator | ++ semver latest 5.0.0
2026-04-05 01:23:13.405201 | orchestrator |
2026-04-05 01:23:13.405301 | orchestrator | ## Containers @ testbed-node-1
2026-04-05 01:23:13.405317 | orchestrator |
2026-04-05 01:23:13.405329 | orchestrator | + [[ -1 -eq -1 ]]
2026-04-05 01:23:13.405340 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-04-05 01:23:13.405352 | orchestrator | + echo
2026-04-05 01:23:13.405364 | orchestrator | + echo '## Containers @ testbed-node-1'
2026-04-05 01:23:13.405376 | orchestrator | + echo
2026-04-05 01:23:13.405387 | orchestrator | + osism container testbed-node-1 ps
2026-04-05 01:23:14.977643 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-05 01:23:14.977749 | orchestrator | 4198e98f0da8 registry.osism.tech/kolla/octavia-worker:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2026-04-05 01:23:14.977767 | orchestrator | 4c9d86fbda30 registry.osism.tech/kolla/octavia-housekeeping:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2026-04-05 01:23:14.977779 | orchestrator | 9a7acfa570be registry.osism.tech/kolla/octavia-health-manager:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2026-04-05 01:23:14.977790 | orchestrator | 9f113cee2ca1 registry.osism.tech/kolla/octavia-driver-agent:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2026-04-05 01:23:14.977801 | orchestrator | 483fd4f43292 registry.osism.tech/kolla/octavia-api:2025.1 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api
2026-04-05 01:23:14.977817 | orchestrator | 85c8eb8dd6b5 registry.osism.tech/kolla/nova-novncproxy:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_novncproxy
2026-04-05 01:23:14.977828 | orchestrator | 003cc7e61762 registry.osism.tech/kolla/magnum-conductor:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor
2026-04-05 01:23:14.977839 | orchestrator | 3167a2644b0a registry.osism.tech/kolla/nova-conductor:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_conductor
2026-04-05 01:23:14.977850 | orchestrator | cf4d1d965cc7 registry.osism.tech/kolla/magnum-api:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_api
2026-04-05 01:23:14.977861 | orchestrator | 0f0eb1a8f4ea registry.osism.tech/kolla/grafana:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana
2026-04-05 01:23:14.977872 | orchestrator | b50a628c7ab3 registry.osism.tech/kolla/placement-api:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2026-04-05 01:23:14.977958 | orchestrator | 247d6a6fc97c registry.osism.tech/kolla/designate-worker:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_worker
2026-04-05 01:23:14.977972 | orchestrator | c1bab64eb627 registry.osism.tech/kolla/designate-mdns:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_mdns
2026-04-05 01:23:14.977983 | orchestrator | a9c5ffd8591d registry.osism.tech/kolla/designate-producer:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_producer
2026-04-05 01:23:14.977995 | orchestrator | 2375c54b8ccf registry.osism.tech/kolla/designate-central:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_central
2026-04-05 01:23:14.978005 | orchestrator | 83774095a455 registry.osism.tech/kolla/designate-api:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_api
2026-04-05 01:23:14.978087 | orchestrator | 8a3c65caa1fe registry.osism.tech/kolla/nova-api:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_metadata
2026-04-05 01:23:14.978101 | orchestrator | b52fd0546f33 registry.osism.tech/kolla/designate-backend-bind9:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_backend_bind9
2026-04-05 01:23:14.978112 | orchestrator | c96974d46778 registry.osism.tech/kolla/nova-api:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api
2026-04-05 01:23:14.978123 | orchestrator | 35678b564634 registry.osism.tech/kolla/nova-scheduler:2025.1 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_scheduler
2026-04-05 01:23:14.978134 | orchestrator | 27d7da0acac9 registry.osism.tech/kolla/neutron-server:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) neutron_server
2026-04-05 01:23:14.978165 | orchestrator | 4f10fc0321f4 registry.osism.tech/kolla/barbican-worker:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_worker
2026-04-05 01:23:14.978177 | orchestrator | 88fbc6ba5231 registry.osism.tech/kolla/barbican-keystone-listener:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_keystone_listener
2026-04-05 01:23:14.978188 | orchestrator | a236b5e970e8 registry.osism.tech/kolla/barbican-api:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api
2026-04-05 01:23:14.978199 | orchestrator | 6e8fbadc8a2b registry.osism.tech/kolla/cinder-backup:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_backup
2026-04-05 01:23:14.978212 | orchestrator | 7b5ab6431103 registry.osism.tech/kolla/cinder-volume:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_volume
2026-04-05 01:23:14.978225 | orchestrator | 5e576ecefc3b registry.osism.tech/kolla/glance-api:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api
2026-04-05 01:23:14.978237 | orchestrator | f586dcfb1865 registry.osism.tech/kolla/cinder-scheduler:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler
2026-04-05 01:23:14.978250 | orchestrator | d472961176eb registry.osism.tech/kolla/cinder-api:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api
2026-04-05 01:23:14.978263 | orchestrator | d55dff0ce811 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_elasticsearch_exporter
2026-04-05 01:23:14.978277 | orchestrator | 284e45391fad registry.osism.tech/kolla/prometheus-cadvisor:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_cadvisor
2026-04-05 01:23:14.978290 | orchestrator | dfce7811f587 registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_memcached_exporter
2026-04-05 01:23:14.978303 | orchestrator | 93d822a76786 registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_mysqld_exporter
2026-04-05 01:23:14.978316 | orchestrator | d3e33be33d64 registry.osism.tech/kolla/prometheus-node-exporter:2025.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_node_exporter
2026-04-05 01:23:14.978337 | orchestrator | fa6c95498327 registry.osism.tech/kolla/keystone:2025.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone
2026-04-05 01:23:14.978360 | orchestrator | dc973435876e registry.osism.tech/kolla/keystone-fernet:2025.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet
2026-04-05 01:23:14.978374 | orchestrator | 9476bc226d61 registry.osism.tech/kolla/horizon:2025.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon
2026-04-05 01:23:14.978387 | orchestrator | b47785c52f19 registry.osism.tech/kolla/keystone-ssh:2025.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh
2026-04-05 01:23:14.978400 | orchestrator | 548da66521d6 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 18 minutes ago Up 18 minutes ceph-mgr-testbed-node-1
2026-04-05 01:23:14.978412 | orchestrator | 90ecb033509c registry.osism.tech/kolla/opensearch-dashboards:2025.1 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards
2026-04-05 01:23:14.978425 | orchestrator | eb8e499f8f57 registry.osism.tech/kolla/mariadb-server:2025.1 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb
2026-04-05 01:23:14.978438 | orchestrator | 4ea43b8b3802 registry.osism.tech/kolla/opensearch:2025.1 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch
2026-04-05 01:23:14.978451 | orchestrator | 27704500955e registry.osism.tech/kolla/keepalived:2025.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived
2026-04-05 01:23:14.978464 | orchestrator | 8cd4ea074cbe registry.osism.tech/kolla/proxysql:2025.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql
2026-04-05 01:23:14.978484 | orchestrator | 515038b7dfd9 registry.osism.tech/kolla/haproxy:2025.1 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy
2026-04-05 01:23:14.978498 | orchestrator | c1762350f3f6 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-1
2026-04-05 01:23:14.978511 | orchestrator | c4fecfbc13c2 registry.osism.tech/kolla/ovn-northd:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd
2026-04-05 01:23:14.978524 | orchestrator | 24a02b0592c6 registry.osism.tech/kolla/ovn-sb-db-relay:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db_relay_1
2026-04-05 01:23:14.978536 | orchestrator | 4d83e1214b21 registry.osism.tech/kolla/ovn-sb-db-server:2025.1 "dumb-init --single-…" 27 minutes ago Up 25 minutes ovn_sb_db
2026-04-05 01:23:14.978549 | orchestrator | e432cb7f8dff registry.osism.tech/kolla/ovn-nb-db-server:2025.1 "dumb-init --single-…" 27 minutes ago Up 25 minutes ovn_nb_db
2026-04-05 01:23:14.978562 | orchestrator | bc2dfe00ebd9 registry.osism.tech/kolla/rabbitmq:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq
2026-04-05 01:23:14.978576 | orchestrator | c005d7d07139 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-1
2026-04-05 01:23:14.978589 | orchestrator | 7a5b636ae0a5 registry.osism.tech/kolla/ovn-controller:2025.1 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_controller
2026-04-05 01:23:14.978601 | orchestrator | 21026da58c88 registry.osism.tech/kolla/openvswitch-vswitchd:2025.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_vswitchd
2026-04-05 01:23:14.978612 | orchestrator | 2a0d7ef5e836 registry.osism.tech/kolla/openvswitch-db-server:2025.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db
2026-04-05 01:23:14.978629 | orchestrator | 4120dbe54568 registry.osism.tech/kolla/redis-sentinel:2025.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel
2026-04-05 01:23:14.978640 | orchestrator | 78be6d74004c registry.osism.tech/kolla/redis:2025.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis
2026-04-05 01:23:14.978651 | orchestrator | e013333a3874 registry.osism.tech/kolla/memcached:2025.1 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) memcached
2026-04-05 01:23:14.978662 | orchestrator | ca57405bee6e registry.osism.tech/kolla/cron:2025.1 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron
2026-04-05 01:23:14.978674 | orchestrator | ffb45de95434 registry.osism.tech/kolla/kolla-toolbox:2025.1 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox
2026-04-05 01:23:14.978692 | orchestrator | 7a2d7399041e registry.osism.tech/kolla/fluentd:2025.1 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd
2026-04-05 01:23:15.139422 | orchestrator |
2026-04-05 01:23:15.139541 | orchestrator | ## Images @ testbed-node-1
2026-04-05 01:23:15.139558 | orchestrator |
2026-04-05 01:23:15.139568 | orchestrator | + echo
2026-04-05 01:23:15.139577 | orchestrator | + echo '## Images @ testbed-node-1'
2026-04-05 01:23:15.139588 | orchestrator | + echo
2026-04-05 01:23:15.139597 | orchestrator | + osism container testbed-node-1 images
2026-04-05 01:23:16.641864 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-05 01:23:16.642001 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 9c619feb45c6 22 hours ago 1.35GB
2026-04-05 01:23:16.642060 | orchestrator | registry.osism.tech/kolla/cron 2025.1 53831d2a110c 5 days ago 277MB
2026-04-05 01:23:16.642088 | orchestrator | registry.osism.tech/kolla/proxysql 2025.1 95a248a255b0 5 days ago 427MB 2026-04-05 01:23:16.642099 | orchestrator | registry.osism.tech/kolla/keepalived 2025.1 055a08d1d646 5 days ago 288MB 2026-04-05 01:23:16.642109 | orchestrator | registry.osism.tech/kolla/memcached 2025.1 cac05c20af97 5 days ago 277MB 2026-04-05 01:23:16.642119 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2025.1 c170972da654 5 days ago 683MB 2026-04-05 01:23:16.642128 | orchestrator | registry.osism.tech/kolla/rabbitmq 2025.1 9c6c79e2e193 5 days ago 350MB 2026-04-05 01:23:16.642138 | orchestrator | registry.osism.tech/kolla/haproxy 2025.1 4951106b8b70 5 days ago 285MB 2026-04-05 01:23:16.642148 | orchestrator | registry.osism.tech/kolla/redis 2025.1 f2f3f0f280de 5 days ago 284MB 2026-04-05 01:23:16.642157 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2025.1 bac3fcf27cf1 5 days ago 284MB 2026-04-05 01:23:16.642167 | orchestrator | registry.osism.tech/kolla/mariadb-server 2025.1 829501547cd8 5 days ago 463MB 2026-04-05 01:23:16.642176 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2025.1 9f5afac77e5c 5 days ago 293MB 2026-04-05 01:23:16.642186 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2025.1 aef84137d109 5 days ago 293MB 2026-04-05 01:23:16.642196 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2025.1 8c8c7462421e 5 days ago 309MB 2026-04-05 01:23:16.642205 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2025.1 98c589004138 5 days ago 317MB 2026-04-05 01:23:16.642215 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2025.1 fea6a6b33ce4 5 days ago 312MB 2026-04-05 01:23:16.642224 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2025.1 77360379dc5a 5 days ago 368MB 2026-04-05 01:23:16.642262 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2025.1 5ef617a21b54 5 days ago 303MB 2026-04-05 
01:23:16.642280 | orchestrator | registry.osism.tech/kolla/horizon 2025.1 43951a9692de 5 days ago 1.2GB 2026-04-05 01:23:16.642296 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2025.1 7a60872df8bd 5 days ago 301MB 2026-04-05 01:23:16.642312 | orchestrator | registry.osism.tech/kolla/ovn-northd 2025.1 b22d6b5967f6 5 days ago 301MB 2026-04-05 01:23:16.642329 | orchestrator | registry.osism.tech/kolla/ovn-controller 2025.1 b0c226bf7131 5 days ago 301MB 2026-04-05 01:23:16.642346 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2025.1 be69c3ad4ebc 5 days ago 301MB 2026-04-05 01:23:16.642362 | orchestrator | registry.osism.tech/kolla/placement-api 2025.1 960aa6a4a8de 5 days ago 996MB 2026-04-05 01:23:16.642380 | orchestrator | registry.osism.tech/kolla/glance-api 2025.1 369b7ddbf017 5 days ago 1.12GB 2026-04-05 01:23:16.642397 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2025.1 d8e83229f11e 5 days ago 1.23GB 2026-04-05 01:23:16.642416 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2025.1 3fe2b0b8cfee 5 days ago 1.39GB 2026-04-05 01:23:16.642428 | orchestrator | registry.osism.tech/kolla/nova-api 2025.1 158e57839a6b 5 days ago 1.23GB 2026-04-05 01:23:16.642439 | orchestrator | registry.osism.tech/kolla/nova-conductor 2025.1 5e0b67322fbf 5 days ago 1.23GB 2026-04-05 01:23:16.642450 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2025.1 e83ea289f589 5 days ago 1.05GB 2026-04-05 01:23:16.642463 | orchestrator | registry.osism.tech/kolla/octavia-worker 2025.1 51936f6dc571 5 days ago 1.05GB 2026-04-05 01:23:16.642476 | orchestrator | registry.osism.tech/kolla/octavia-api 2025.1 a8a7290762c3 5 days ago 1.07GB 2026-04-05 01:23:16.642488 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2025.1 07f1afcad488 5 days ago 1.05GB 2026-04-05 01:23:16.642501 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2025.1 0d3b38b2c976 5 days ago 1.07GB 2026-04-05 01:23:16.642515 | orchestrator | 
registry.osism.tech/kolla/cinder-api 2025.1 afd6512aefbf 5 days ago 1.43GB 2026-04-05 01:23:16.642527 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2025.1 cb4b4f730395 5 days ago 1.43GB 2026-04-05 01:23:16.642539 | orchestrator | registry.osism.tech/kolla/cinder-volume 2025.1 db4179e1711f 5 days ago 1.79GB 2026-04-05 01:23:16.642568 | orchestrator | registry.osism.tech/kolla/cinder-backup 2025.1 d0319351afef 5 days ago 1.44GB 2026-04-05 01:23:16.642581 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2025.1 1c66eb60d90d 5 days ago 1.06GB 2026-04-05 01:23:16.642594 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2025.1 f3aeb32a6011 5 days ago 1.05GB 2026-04-05 01:23:16.642608 | orchestrator | registry.osism.tech/kolla/keystone 2025.1 d2329ba4a45d 5 days ago 1.09GB 2026-04-05 01:23:16.642620 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2025.1 d7fa3d0ffbc8 5 days ago 1.27GB 2026-04-05 01:23:16.642633 | orchestrator | registry.osism.tech/kolla/magnum-api 2025.1 76d813dd9361 5 days ago 1.15GB 2026-04-05 01:23:16.642652 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2025.1 fffdc676c6f3 5 days ago 1.01GB 2026-04-05 01:23:16.642671 | orchestrator | registry.osism.tech/kolla/designate-producer 2025.1 f1774da93f29 5 days ago 1GB 2026-04-05 01:23:16.642690 | orchestrator | registry.osism.tech/kolla/designate-central 2025.1 313130236671 5 days ago 1GB 2026-04-05 01:23:16.642721 | orchestrator | registry.osism.tech/kolla/designate-api 2025.1 e4df27d536ad 5 days ago 1GB 2026-04-05 01:23:16.642744 | orchestrator | registry.osism.tech/kolla/designate-worker 2025.1 da5b5dd8f0f8 5 days ago 1.01GB 2026-04-05 01:23:16.642756 | orchestrator | registry.osism.tech/kolla/designate-mdns 2025.1 981f1c0984fd 5 days ago 1GB 2026-04-05 01:23:16.642767 | orchestrator | registry.osism.tech/kolla/neutron-server 2025.1 77837382f9b4 5 days ago 1.24GB 2026-04-05 01:23:16.642777 | orchestrator | 
registry.osism.tech/kolla/barbican-keystone-listener 2025.1 2f7505ba4454 5 days ago 1GB 2026-04-05 01:23:16.642788 | orchestrator | registry.osism.tech/kolla/barbican-worker 2025.1 68100f4cfa52 5 days ago 1GB 2026-04-05 01:23:16.642799 | orchestrator | registry.osism.tech/kolla/barbican-api 2025.1 6e6f7bfebcca 5 days ago 1GB 2026-04-05 01:23:16.642810 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-relay 2025.1 33ee60a7efe8 5 days ago 301MB 2026-04-05 01:23:16.642821 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2025.1 d1fff501712d 7 days ago 1.54GB 2026-04-05 01:23:16.642832 | orchestrator | registry.osism.tech/kolla/opensearch 2025.1 26c9d21ae7a0 7 days ago 1.57GB 2026-04-05 01:23:16.642843 | orchestrator | registry.osism.tech/kolla/fluentd 2025.1 16094ab8b9a7 7 days ago 590MB 2026-04-05 01:23:16.642854 | orchestrator | registry.osism.tech/kolla/grafana 2025.1 797914887ee8 7 days ago 1.04GB 2026-04-05 01:23:16.807526 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-05 01:23:16.808208 | orchestrator | ++ semver latest 5.0.0 2026-04-05 01:23:16.868689 | orchestrator | 2026-04-05 01:23:16.868810 | orchestrator | ## Containers @ testbed-node-2 2026-04-05 01:23:16.868836 | orchestrator | 2026-04-05 01:23:16.868855 | orchestrator | + [[ -1 -eq -1 ]] 2026-04-05 01:23:16.868876 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-05 01:23:16.868889 | orchestrator | + echo 2026-04-05 01:23:16.868955 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-04-05 01:23:16.868970 | orchestrator | + echo 2026-04-05 01:23:16.868995 | orchestrator | + osism container testbed-node-2 ps 2026-04-05 01:23:18.418578 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-05 01:23:18.418699 | orchestrator | 2c4ad0d03578 registry.osism.tech/kolla/octavia-worker:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-04-05 01:23:18.418723 | 
orchestrator | 03fe2032f51b registry.osism.tech/kolla/octavia-housekeeping:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-04-05 01:23:18.418739 | orchestrator | c085f0549a78 registry.osism.tech/kolla/octavia-health-manager:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-04-05 01:23:18.418754 | orchestrator | f20e7bd75851 registry.osism.tech/kolla/octavia-driver-agent:2025.1 "dumb-init --single-…" 5 minutes ago Up 4 minutes octavia_driver_agent 2026-04-05 01:23:18.418769 | orchestrator | 4a8edb5fe738 registry.osism.tech/kolla/octavia-api:2025.1 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2026-04-05 01:23:18.418785 | orchestrator | f5350ca0ff54 registry.osism.tech/kolla/nova-novncproxy:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_novncproxy 2026-04-05 01:23:18.418799 | orchestrator | 1e0cae8fca9f registry.osism.tech/kolla/magnum-conductor:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor 2026-04-05 01:23:18.418815 | orchestrator | 40a19b719a2d registry.osism.tech/kolla/nova-conductor:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_conductor 2026-04-05 01:23:18.418829 | orchestrator | cb242ca70b90 registry.osism.tech/kolla/magnum-api:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_api 2026-04-05 01:23:18.418869 | orchestrator | 44f6d76afae3 registry.osism.tech/kolla/grafana:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana 2026-04-05 01:23:18.418879 | orchestrator | 12680f8e7ee8 registry.osism.tech/kolla/placement-api:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2026-04-05 01:23:18.418888 | orchestrator | ea9cc81423b7 registry.osism.tech/kolla/designate-worker:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_worker 2026-04-05 01:23:18.418896 | orchestrator | 
7ad70a162d0a registry.osism.tech/kolla/designate-mdns:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_mdns 2026-04-05 01:23:18.418966 | orchestrator | f4562c63fe8e registry.osism.tech/kolla/designate-producer:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_producer 2026-04-05 01:23:18.418975 | orchestrator | a3abde6e2c8f registry.osism.tech/kolla/designate-central:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_central 2026-04-05 01:23:18.418983 | orchestrator | bf7fc827a30c registry.osism.tech/kolla/designate-api:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_api 2026-04-05 01:23:18.418992 | orchestrator | f38a887593bb registry.osism.tech/kolla/nova-api:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_metadata 2026-04-05 01:23:18.419000 | orchestrator | e5249c064497 registry.osism.tech/kolla/designate-backend-bind9:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_backend_bind9 2026-04-05 01:23:18.419008 | orchestrator | 0658efd5e07c registry.osism.tech/kolla/nova-api:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api 2026-04-05 01:23:18.419017 | orchestrator | 13d216ff13ad registry.osism.tech/kolla/nova-scheduler:2025.1 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_scheduler 2026-04-05 01:23:18.419026 | orchestrator | 7b4b0cf69dbf registry.osism.tech/kolla/neutron-server:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) neutron_server 2026-04-05 01:23:18.419052 | orchestrator | b5c8c9ca5134 registry.osism.tech/kolla/barbican-worker:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_worker 2026-04-05 01:23:18.419061 | orchestrator | e5e8cf3a59a4 registry.osism.tech/kolla/barbican-keystone-listener:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_keystone_listener 
2026-04-05 01:23:18.419086 | orchestrator | 30f5db557e71 registry.osism.tech/kolla/barbican-api:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api 2026-04-05 01:23:18.419101 | orchestrator | 7ca0668f591f registry.osism.tech/kolla/cinder-backup:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_backup 2026-04-05 01:23:18.419116 | orchestrator | eaa6be0bdd4e registry.osism.tech/kolla/cinder-volume:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_volume 2026-04-05 01:23:18.419130 | orchestrator | 1efbef2aec6d registry.osism.tech/kolla/cinder-scheduler:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler 2026-04-05 01:23:18.419144 | orchestrator | 36f8fb5c908e registry.osism.tech/kolla/glance-api:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api 2026-04-05 01:23:18.419230 | orchestrator | d35b49a36c30 registry.osism.tech/kolla/cinder-api:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api 2026-04-05 01:23:18.419246 | orchestrator | 0e9e1e61f6d3 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_elasticsearch_exporter 2026-04-05 01:23:18.419256 | orchestrator | e0e245796ee2 registry.osism.tech/kolla/prometheus-cadvisor:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_cadvisor 2026-04-05 01:23:18.419267 | orchestrator | 14e41906c690 registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_memcached_exporter 2026-04-05 01:23:18.419283 | orchestrator | c9e4e69898b1 registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_mysqld_exporter 2026-04-05 01:23:18.419293 | orchestrator | 570d9be18fee registry.osism.tech/kolla/prometheus-node-exporter:2025.1 "dumb-init --single-…" 
17 minutes ago Up 17 minutes prometheus_node_exporter 2026-04-05 01:23:18.419304 | orchestrator | 5274e978b7c1 registry.osism.tech/kolla/keystone:2025.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone 2026-04-05 01:23:18.419314 | orchestrator | 05bc0bdeb4d2 registry.osism.tech/kolla/keystone-fernet:2025.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet 2026-04-05 01:23:18.419324 | orchestrator | 4e712d7e8cf6 registry.osism.tech/kolla/horizon:2025.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon 2026-04-05 01:23:18.419334 | orchestrator | 6adf6aa100cd registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 18 minutes ago Up 18 minutes ceph-mgr-testbed-node-2 2026-04-05 01:23:18.419344 | orchestrator | d1e32025e13b registry.osism.tech/kolla/keystone-ssh:2025.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh 2026-04-05 01:23:18.419355 | orchestrator | 0517cc2212f9 registry.osism.tech/kolla/opensearch-dashboards:2025.1 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards 2026-04-05 01:23:18.419366 | orchestrator | ca1ae96cd559 registry.osism.tech/kolla/mariadb-server:2025.1 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb 2026-04-05 01:23:18.419376 | orchestrator | 2f339dd65e7d registry.osism.tech/kolla/opensearch:2025.1 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2026-04-05 01:23:18.419386 | orchestrator | 50e70d257db4 registry.osism.tech/kolla/keepalived:2025.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived 2026-04-05 01:23:18.419396 | orchestrator | a52efd135d52 registry.osism.tech/kolla/proxysql:2025.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql 2026-04-05 01:23:18.419416 | orchestrator | b387bfe71120 registry.osism.tech/kolla/haproxy:2025.1 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2026-04-05 
01:23:18.419427 | orchestrator | 5320328681ca registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-2 2026-04-05 01:23:18.419437 | orchestrator | 419245cdf4f1 registry.osism.tech/kolla/ovn-northd:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd 2026-04-05 01:23:18.419447 | orchestrator | f2c5cb937b37 registry.osism.tech/kolla/ovn-sb-db-relay:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db_relay_1 2026-04-05 01:23:18.419462 | orchestrator | 2854c138aa11 registry.osism.tech/kolla/ovn-sb-db-server:2025.1 "dumb-init --single-…" 27 minutes ago Up 25 minutes ovn_sb_db 2026-04-05 01:23:18.419471 | orchestrator | 778bdc65ce54 registry.osism.tech/kolla/rabbitmq:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq 2026-04-05 01:23:18.419479 | orchestrator | 28ac111bec52 registry.osism.tech/kolla/ovn-nb-db-server:2025.1 "dumb-init --single-…" 27 minutes ago Up 25 minutes ovn_nb_db 2026-04-05 01:23:18.419488 | orchestrator | edf2d16fd18e registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-2 2026-04-05 01:23:18.419497 | orchestrator | fced914ab6c8 registry.osism.tech/kolla/ovn-controller:2025.1 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_controller 2026-04-05 01:23:18.419505 | orchestrator | 702578ad39de registry.osism.tech/kolla/openvswitch-vswitchd:2025.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_vswitchd 2026-04-05 01:23:18.419514 | orchestrator | 890d86d2fb93 registry.osism.tech/kolla/openvswitch-db-server:2025.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db 2026-04-05 01:23:18.419522 | orchestrator | bcc02edea9c3 registry.osism.tech/kolla/redis-sentinel:2025.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2026-04-05 01:23:18.419531 | orchestrator | 0098b4fbe2ae 
registry.osism.tech/kolla/redis:2025.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2026-04-05 01:23:18.419539 | orchestrator | 40d1295ea3c2 registry.osism.tech/kolla/memcached:2025.1 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) memcached 2026-04-05 01:23:18.419548 | orchestrator | 73f338bfab31 registry.osism.tech/kolla/cron:2025.1 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron 2026-04-05 01:23:18.419557 | orchestrator | 26e34392773f registry.osism.tech/kolla/kolla-toolbox:2025.1 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2026-04-05 01:23:18.419570 | orchestrator | aab174926e31 registry.osism.tech/kolla/fluentd:2025.1 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd 2026-04-05 01:23:18.584513 | orchestrator | 2026-04-05 01:23:18.584603 | orchestrator | ## Images @ testbed-node-2 2026-04-05 01:23:18.584616 | orchestrator | 2026-04-05 01:23:18.584627 | orchestrator | + echo 2026-04-05 01:23:18.584636 | orchestrator | + echo '## Images @ testbed-node-2' 2026-04-05 01:23:18.584649 | orchestrator | + echo 2026-04-05 01:23:18.584661 | orchestrator | + osism container testbed-node-2 images 2026-04-05 01:23:20.082516 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-05 01:23:20.082618 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 9c619feb45c6 22 hours ago 1.35GB 2026-04-05 01:23:20.082632 | orchestrator | registry.osism.tech/kolla/cron 2025.1 53831d2a110c 5 days ago 277MB 2026-04-05 01:23:20.082644 | orchestrator | registry.osism.tech/kolla/proxysql 2025.1 95a248a255b0 5 days ago 427MB 2026-04-05 01:23:20.082655 | orchestrator | registry.osism.tech/kolla/keepalived 2025.1 055a08d1d646 5 days ago 288MB 2026-04-05 01:23:20.082666 | orchestrator | registry.osism.tech/kolla/memcached 2025.1 cac05c20af97 5 days ago 277MB 2026-04-05 01:23:20.082677 | orchestrator | registry.osism.tech/kolla/rabbitmq 2025.1 9c6c79e2e193 5 days ago 350MB 2026-04-05 01:23:20.082688 | 
orchestrator | registry.osism.tech/kolla/kolla-toolbox 2025.1 c170972da654 5 days ago 683MB 2026-04-05 01:23:20.082724 | orchestrator | registry.osism.tech/kolla/haproxy 2025.1 4951106b8b70 5 days ago 285MB 2026-04-05 01:23:20.082736 | orchestrator | registry.osism.tech/kolla/redis 2025.1 f2f3f0f280de 5 days ago 284MB 2026-04-05 01:23:20.082748 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2025.1 bac3fcf27cf1 5 days ago 284MB 2026-04-05 01:23:20.082759 | orchestrator | registry.osism.tech/kolla/mariadb-server 2025.1 829501547cd8 5 days ago 463MB 2026-04-05 01:23:20.082769 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2025.1 9f5afac77e5c 5 days ago 293MB 2026-04-05 01:23:20.082780 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2025.1 aef84137d109 5 days ago 293MB 2026-04-05 01:23:20.082791 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2025.1 8c8c7462421e 5 days ago 309MB 2026-04-05 01:23:20.082802 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2025.1 98c589004138 5 days ago 317MB 2026-04-05 01:23:20.082813 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2025.1 fea6a6b33ce4 5 days ago 312MB 2026-04-05 01:23:20.082824 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2025.1 77360379dc5a 5 days ago 368MB 2026-04-05 01:23:20.082835 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2025.1 5ef617a21b54 5 days ago 303MB 2026-04-05 01:23:20.082846 | orchestrator | registry.osism.tech/kolla/horizon 2025.1 43951a9692de 5 days ago 1.2GB 2026-04-05 01:23:20.082857 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2025.1 7a60872df8bd 5 days ago 301MB 2026-04-05 01:23:20.082868 | orchestrator | registry.osism.tech/kolla/ovn-northd 2025.1 b22d6b5967f6 5 days ago 301MB 2026-04-05 01:23:20.082879 | orchestrator | registry.osism.tech/kolla/ovn-controller 2025.1 b0c226bf7131 5 days ago 301MB 2026-04-05 01:23:20.082890 | 
orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2025.1 be69c3ad4ebc 5 days ago 301MB 2026-04-05 01:23:20.082981 | orchestrator | registry.osism.tech/kolla/placement-api 2025.1 960aa6a4a8de 5 days ago 996MB 2026-04-05 01:23:20.082998 | orchestrator | registry.osism.tech/kolla/glance-api 2025.1 369b7ddbf017 5 days ago 1.12GB 2026-04-05 01:23:20.083009 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2025.1 d8e83229f11e 5 days ago 1.23GB 2026-04-05 01:23:20.083035 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2025.1 3fe2b0b8cfee 5 days ago 1.39GB 2026-04-05 01:23:20.083047 | orchestrator | registry.osism.tech/kolla/nova-api 2025.1 158e57839a6b 5 days ago 1.23GB 2026-04-05 01:23:20.083058 | orchestrator | registry.osism.tech/kolla/nova-conductor 2025.1 5e0b67322fbf 5 days ago 1.23GB 2026-04-05 01:23:20.083068 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2025.1 e83ea289f589 5 days ago 1.05GB 2026-04-05 01:23:20.083082 | orchestrator | registry.osism.tech/kolla/octavia-worker 2025.1 51936f6dc571 5 days ago 1.05GB 2026-04-05 01:23:20.083094 | orchestrator | registry.osism.tech/kolla/octavia-api 2025.1 a8a7290762c3 5 days ago 1.07GB 2026-04-05 01:23:20.083107 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2025.1 07f1afcad488 5 days ago 1.05GB 2026-04-05 01:23:20.083120 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2025.1 0d3b38b2c976 5 days ago 1.07GB 2026-04-05 01:23:20.083133 | orchestrator | registry.osism.tech/kolla/cinder-api 2025.1 afd6512aefbf 5 days ago 1.43GB 2026-04-05 01:23:20.083146 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2025.1 cb4b4f730395 5 days ago 1.43GB 2026-04-05 01:23:20.083187 | orchestrator | registry.osism.tech/kolla/cinder-volume 2025.1 db4179e1711f 5 days ago 1.79GB 2026-04-05 01:23:20.083201 | orchestrator | registry.osism.tech/kolla/cinder-backup 2025.1 d0319351afef 5 days ago 1.44GB 2026-04-05 01:23:20.083213 | orchestrator | 
registry.osism.tech/kolla/keystone-ssh 2025.1 1c66eb60d90d 5 days ago 1.06GB 2026-04-05 01:23:20.083227 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2025.1 f3aeb32a6011 5 days ago 1.05GB 2026-04-05 01:23:20.083246 | orchestrator | registry.osism.tech/kolla/keystone 2025.1 d2329ba4a45d 5 days ago 1.09GB 2026-04-05 01:23:20.083264 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2025.1 d7fa3d0ffbc8 5 days ago 1.27GB 2026-04-05 01:23:20.083282 | orchestrator | registry.osism.tech/kolla/magnum-api 2025.1 76d813dd9361 5 days ago 1.15GB 2026-04-05 01:23:20.083300 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2025.1 fffdc676c6f3 5 days ago 1.01GB 2026-04-05 01:23:20.083319 | orchestrator | registry.osism.tech/kolla/designate-producer 2025.1 f1774da93f29 5 days ago 1GB 2026-04-05 01:23:20.083340 | orchestrator | registry.osism.tech/kolla/designate-central 2025.1 313130236671 5 days ago 1GB 2026-04-05 01:23:20.083360 | orchestrator | registry.osism.tech/kolla/designate-api 2025.1 e4df27d536ad 5 days ago 1GB 2026-04-05 01:23:20.083379 | orchestrator | registry.osism.tech/kolla/designate-worker 2025.1 da5b5dd8f0f8 5 days ago 1.01GB 2026-04-05 01:23:20.083398 | orchestrator | registry.osism.tech/kolla/designate-mdns 2025.1 981f1c0984fd 5 days ago 1GB 2026-04-05 01:23:20.083418 | orchestrator | registry.osism.tech/kolla/neutron-server 2025.1 77837382f9b4 5 days ago 1.24GB 2026-04-05 01:23:20.083438 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2025.1 2f7505ba4454 5 days ago 1GB 2026-04-05 01:23:20.083457 | orchestrator | registry.osism.tech/kolla/barbican-worker 2025.1 68100f4cfa52 5 days ago 1GB 2026-04-05 01:23:20.083483 | orchestrator | registry.osism.tech/kolla/barbican-api 2025.1 6e6f7bfebcca 5 days ago 1GB 2026-04-05 01:23:20.083496 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-relay 2025.1 33ee60a7efe8 5 days ago 301MB 2026-04-05 01:23:20.083508 | orchestrator | 
registry.osism.tech/kolla/opensearch-dashboards 2025.1 d1fff501712d 7 days ago 1.54GB 2026-04-05 01:23:20.083519 | orchestrator | registry.osism.tech/kolla/opensearch 2025.1 26c9d21ae7a0 7 days ago 1.57GB 2026-04-05 01:23:20.083530 | orchestrator | registry.osism.tech/kolla/fluentd 2025.1 16094ab8b9a7 7 days ago 590MB 2026-04-05 01:23:20.083541 | orchestrator | registry.osism.tech/kolla/grafana 2025.1 797914887ee8 7 days ago 1.04GB 2026-04-05 01:23:20.245879 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-04-05 01:23:20.252573 | orchestrator | + set -e 2026-04-05 01:23:20.252621 | orchestrator | + source /opt/manager-vars.sh 2026-04-05 01:23:20.254338 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-05 01:23:20.254445 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-05 01:23:20.254477 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-05 01:23:20.254679 | orchestrator | ++ CEPH_VERSION=reef 2026-04-05 01:23:20.254711 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-05 01:23:20.254749 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-05 01:23:20.254781 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-05 01:23:20.254800 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-05 01:23:20.254817 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2026-04-05 01:23:20.254836 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2026-04-05 01:23:20.254856 | orchestrator | ++ export ARA=false 2026-04-05 01:23:20.254875 | orchestrator | ++ ARA=false 2026-04-05 01:23:20.254897 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-05 01:23:20.254946 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-05 01:23:20.254957 | orchestrator | ++ export TEMPEST=true 2026-04-05 01:23:20.254968 | orchestrator | ++ TEMPEST=true 2026-04-05 01:23:20.254979 | orchestrator | ++ export IS_ZUUL=true 2026-04-05 01:23:20.255017 | orchestrator | ++ IS_ZUUL=true 2026-04-05 01:23:20.255028 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182 
2026-04-05 01:23:20.255039 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182 2026-04-05 01:23:20.255050 | orchestrator | ++ export EXTERNAL_API=false 2026-04-05 01:23:20.255061 | orchestrator | ++ EXTERNAL_API=false 2026-04-05 01:23:20.255072 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-05 01:23:20.255083 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-05 01:23:20.255094 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-05 01:23:20.255105 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-05 01:23:20.255115 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-05 01:23:20.255127 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-05 01:23:20.255138 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-05 01:23:20.255166 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2026-04-05 01:23:20.261203 | orchestrator | + set -e 2026-04-05 01:23:20.261266 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-05 01:23:20.261278 | orchestrator | ++ export INTERACTIVE=false 2026-04-05 01:23:20.261291 | orchestrator | ++ INTERACTIVE=false 2026-04-05 01:23:20.261301 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-05 01:23:20.261312 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-05 01:23:20.261324 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-05 01:23:20.261857 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-05 01:23:20.266233 | orchestrator | 2026-04-05 01:23:20.266266 | orchestrator | # Ceph status 2026-04-05 01:23:20.266278 | orchestrator | 2026-04-05 01:23:20.266290 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-05 01:23:20.266301 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-05 01:23:20.266313 | orchestrator | + echo 2026-04-05 01:23:20.266324 | orchestrator | + echo '# Ceph status' 2026-04-05 01:23:20.266336 | orchestrator | + echo 
2026-04-05 01:23:20.266347 | orchestrator | + ceph -s 2026-04-05 01:23:20.838289 | orchestrator | cluster: 2026-04-05 01:23:20.838400 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2026-04-05 01:23:20.838417 | orchestrator | health: HEALTH_OK 2026-04-05 01:23:20.838430 | orchestrator | 2026-04-05 01:23:20.838443 | orchestrator | services: 2026-04-05 01:23:20.838454 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 28m) 2026-04-05 01:23:20.838467 | orchestrator | mgr: testbed-node-0(active, since 18m), standbys: testbed-node-1, testbed-node-2 2026-04-05 01:23:20.838479 | orchestrator | mds: 1/1 daemons up, 2 standby 2026-04-05 01:23:20.838489 | orchestrator | osd: 6 osds: 6 up (since 25m), 6 in (since 26m) 2026-04-05 01:23:20.838501 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2026-04-05 01:23:20.838511 | orchestrator | 2026-04-05 01:23:20.838522 | orchestrator | data: 2026-04-05 01:23:20.838533 | orchestrator | volumes: 1/1 healthy 2026-04-05 01:23:20.838544 | orchestrator | pools: 14 pools, 401 pgs 2026-04-05 01:23:20.838555 | orchestrator | objects: 556 objects, 2.2 GiB 2026-04-05 01:23:20.838566 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2026-04-05 01:23:20.838577 | orchestrator | pgs: 401 active+clean 2026-04-05 01:23:20.838588 | orchestrator | 2026-04-05 01:23:20.881774 | orchestrator | 2026-04-05 01:23:20.881891 | orchestrator | # Ceph versions 2026-04-05 01:23:20.881942 | orchestrator | 2026-04-05 01:23:20.881962 | orchestrator | + echo 2026-04-05 01:23:20.881980 | orchestrator | + echo '# Ceph versions' 2026-04-05 01:23:20.882001 | orchestrator | + echo 2026-04-05 01:23:20.882070 | orchestrator | + ceph versions 2026-04-05 01:23:21.563401 | orchestrator | { 2026-04-05 01:23:21.563505 | orchestrator | "mon": { 2026-04-05 01:23:21.563525 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-04-05 01:23:21.563539 | orchestrator | 
}, 2026-04-05 01:23:21.563552 | orchestrator | "mgr": { 2026-04-05 01:23:21.563565 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-04-05 01:23:21.563577 | orchestrator | }, 2026-04-05 01:23:21.563590 | orchestrator | "osd": { 2026-04-05 01:23:21.563602 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 6 2026-04-05 01:23:21.563614 | orchestrator | }, 2026-04-05 01:23:21.563627 | orchestrator | "mds": { 2026-04-05 01:23:21.563639 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-04-05 01:23:21.563651 | orchestrator | }, 2026-04-05 01:23:21.563661 | orchestrator | "rgw": { 2026-04-05 01:23:21.563673 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-04-05 01:23:21.563719 | orchestrator | }, 2026-04-05 01:23:21.563732 | orchestrator | "overall": { 2026-04-05 01:23:21.563745 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 18 2026-04-05 01:23:21.563757 | orchestrator | } 2026-04-05 01:23:21.563768 | orchestrator | } 2026-04-05 01:23:21.611988 | orchestrator | 2026-04-05 01:23:21.612827 | orchestrator | # Ceph OSD tree 2026-04-05 01:23:21.612850 | orchestrator | 2026-04-05 01:23:21.612858 | orchestrator | + echo 2026-04-05 01:23:21.612867 | orchestrator | + echo '# Ceph OSD tree' 2026-04-05 01:23:21.612877 | orchestrator | + echo 2026-04-05 01:23:21.612886 | orchestrator | + ceph osd df tree 2026-04-05 01:23:22.222355 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-04-05 01:23:22.222487 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 400 MiB 113 GiB 5.89 1.00 - root default 2026-04-05 01:23:22.222510 | orchestrator | -3 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 121 MiB 38 GiB 5.86 0.99 - host testbed-node-3 2026-04-05 01:23:22.222530 | 
orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.5 GiB 1 KiB 52 MiB 18 GiB 7.53 1.28 200 up osd.0 2026-04-05 01:23:22.222549 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 855 MiB 786 MiB 1 KiB 70 MiB 19 GiB 4.18 0.71 190 up osd.4 2026-04-05 01:23:22.222566 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.90 1.00 - host testbed-node-4 2026-04-05 01:23:22.222585 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1000 MiB 930 MiB 1 KiB 70 MiB 19 GiB 4.89 0.83 176 up osd.1 2026-04-05 01:23:22.222603 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 6.92 1.18 216 up osd.3 2026-04-05 01:23:22.222621 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.90 1.00 - host testbed-node-5 2026-04-05 01:23:22.222639 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 70 MiB 19 GiB 6.33 1.08 191 up osd.2 2026-04-05 01:23:22.222657 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 70 MiB 19 GiB 5.47 0.93 197 up osd.5 2026-04-05 01:23:22.222674 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 400 MiB 113 GiB 5.89 2026-04-05 01:23:22.222692 | orchestrator | MIN/MAX VAR: 0.71/1.28 STDDEV: 1.16 2026-04-05 01:23:22.267761 | orchestrator | 2026-04-05 01:23:22.267872 | orchestrator | # Ceph monitor status 2026-04-05 01:23:22.267894 | orchestrator | 2026-04-05 01:23:22.267971 | orchestrator | + echo 2026-04-05 01:23:22.267987 | orchestrator | + echo '# Ceph monitor status' 2026-04-05 01:23:22.268002 | orchestrator | + echo 2026-04-05 01:23:22.268015 | orchestrator | + ceph mon stat 2026-04-05 01:23:22.872411 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 
testbed-node-0,testbed-node-1,testbed-node-2 2026-04-05 01:23:22.926672 | orchestrator | 2026-04-05 01:23:22.926983 | orchestrator | # Ceph quorum status 2026-04-05 01:23:22.927018 | orchestrator | 2026-04-05 01:23:22.927031 | orchestrator | + echo 2026-04-05 01:23:22.927043 | orchestrator | + echo '# Ceph quorum status' 2026-04-05 01:23:22.927055 | orchestrator | + echo 2026-04-05 01:23:22.927080 | orchestrator | + ceph quorum_status 2026-04-05 01:23:22.927337 | orchestrator | + jq 2026-04-05 01:23:23.617016 | orchestrator | { 2026-04-05 01:23:23.617126 | orchestrator | "election_epoch": 8, 2026-04-05 01:23:23.617142 | orchestrator | "quorum": [ 2026-04-05 01:23:23.617155 | orchestrator | 0, 2026-04-05 01:23:23.617166 | orchestrator | 1, 2026-04-05 01:23:23.617177 | orchestrator | 2 2026-04-05 01:23:23.617188 | orchestrator | ], 2026-04-05 01:23:23.617199 | orchestrator | "quorum_names": [ 2026-04-05 01:23:23.617210 | orchestrator | "testbed-node-0", 2026-04-05 01:23:23.617222 | orchestrator | "testbed-node-1", 2026-04-05 01:23:23.617233 | orchestrator | "testbed-node-2" 2026-04-05 01:23:23.617244 | orchestrator | ], 2026-04-05 01:23:23.617279 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-04-05 01:23:23.617292 | orchestrator | "quorum_age": 1737, 2026-04-05 01:23:23.617303 | orchestrator | "features": { 2026-04-05 01:23:23.617314 | orchestrator | "quorum_con": "4540138322906710015", 2026-04-05 01:23:23.617325 | orchestrator | "quorum_mon": [ 2026-04-05 01:23:23.617336 | orchestrator | "kraken", 2026-04-05 01:23:23.617347 | orchestrator | "luminous", 2026-04-05 01:23:23.617358 | orchestrator | "mimic", 2026-04-05 01:23:23.617368 | orchestrator | "osdmap-prune", 2026-04-05 01:23:23.617379 | orchestrator | "nautilus", 2026-04-05 01:23:23.617390 | orchestrator | "octopus", 2026-04-05 01:23:23.617401 | orchestrator | "pacific", 2026-04-05 01:23:23.617411 | orchestrator | "elector-pinging", 2026-04-05 01:23:23.617422 | orchestrator | "quincy", 2026-04-05 
01:23:23.617433 | orchestrator | "reef" 2026-04-05 01:23:23.617444 | orchestrator | ] 2026-04-05 01:23:23.617454 | orchestrator | }, 2026-04-05 01:23:23.617465 | orchestrator | "monmap": { 2026-04-05 01:23:23.617476 | orchestrator | "epoch": 1, 2026-04-05 01:23:23.617487 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-04-05 01:23:23.617498 | orchestrator | "modified": "2026-04-05T00:54:03.636537Z", 2026-04-05 01:23:23.617510 | orchestrator | "created": "2026-04-05T00:54:03.636537Z", 2026-04-05 01:23:23.617521 | orchestrator | "min_mon_release": 18, 2026-04-05 01:23:23.617532 | orchestrator | "min_mon_release_name": "reef", 2026-04-05 01:23:23.617542 | orchestrator | "election_strategy": 1, 2026-04-05 01:23:23.617553 | orchestrator | "disallowed_leaders": "", 2026-04-05 01:23:23.617564 | orchestrator | "stretch_mode": false, 2026-04-05 01:23:23.617575 | orchestrator | "tiebreaker_mon": "", 2026-04-05 01:23:23.617585 | orchestrator | "removed_ranks": "", 2026-04-05 01:23:23.617596 | orchestrator | "features": { 2026-04-05 01:23:23.617607 | orchestrator | "persistent": [ 2026-04-05 01:23:23.617618 | orchestrator | "kraken", 2026-04-05 01:23:23.617628 | orchestrator | "luminous", 2026-04-05 01:23:23.617639 | orchestrator | "mimic", 2026-04-05 01:23:23.617649 | orchestrator | "osdmap-prune", 2026-04-05 01:23:23.617660 | orchestrator | "nautilus", 2026-04-05 01:23:23.617671 | orchestrator | "octopus", 2026-04-05 01:23:23.617682 | orchestrator | "pacific", 2026-04-05 01:23:23.617692 | orchestrator | "elector-pinging", 2026-04-05 01:23:23.617703 | orchestrator | "quincy", 2026-04-05 01:23:23.617714 | orchestrator | "reef" 2026-04-05 01:23:23.617725 | orchestrator | ], 2026-04-05 01:23:23.617735 | orchestrator | "optional": [] 2026-04-05 01:23:23.617746 | orchestrator | }, 2026-04-05 01:23:23.617757 | orchestrator | "mons": [ 2026-04-05 01:23:23.617768 | orchestrator | { 2026-04-05 01:23:23.617778 | orchestrator | "rank": 0, 2026-04-05 01:23:23.617789 
| orchestrator | "name": "testbed-node-0", 2026-04-05 01:23:23.617800 | orchestrator | "public_addrs": { 2026-04-05 01:23:23.617811 | orchestrator | "addrvec": [ 2026-04-05 01:23:23.617822 | orchestrator | { 2026-04-05 01:23:23.617833 | orchestrator | "type": "v2", 2026-04-05 01:23:23.617843 | orchestrator | "addr": "192.168.16.10:3300", 2026-04-05 01:23:23.617854 | orchestrator | "nonce": 0 2026-04-05 01:23:23.617865 | orchestrator | }, 2026-04-05 01:23:23.617876 | orchestrator | { 2026-04-05 01:23:23.617887 | orchestrator | "type": "v1", 2026-04-05 01:23:23.617897 | orchestrator | "addr": "192.168.16.10:6789", 2026-04-05 01:23:23.617939 | orchestrator | "nonce": 0 2026-04-05 01:23:23.617951 | orchestrator | } 2026-04-05 01:23:23.617961 | orchestrator | ] 2026-04-05 01:23:23.617972 | orchestrator | }, 2026-04-05 01:23:23.617983 | orchestrator | "addr": "192.168.16.10:6789/0", 2026-04-05 01:23:23.617993 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2026-04-05 01:23:23.618004 | orchestrator | "priority": 0, 2026-04-05 01:23:23.618073 | orchestrator | "weight": 0, 2026-04-05 01:23:23.618086 | orchestrator | "crush_location": "{}" 2026-04-05 01:23:23.618097 | orchestrator | }, 2026-04-05 01:23:23.618107 | orchestrator | { 2026-04-05 01:23:23.618118 | orchestrator | "rank": 1, 2026-04-05 01:23:23.618129 | orchestrator | "name": "testbed-node-1", 2026-04-05 01:23:23.618164 | orchestrator | "public_addrs": { 2026-04-05 01:23:23.618175 | orchestrator | "addrvec": [ 2026-04-05 01:23:23.618186 | orchestrator | { 2026-04-05 01:23:23.618197 | orchestrator | "type": "v2", 2026-04-05 01:23:23.618208 | orchestrator | "addr": "192.168.16.11:3300", 2026-04-05 01:23:23.618219 | orchestrator | "nonce": 0 2026-04-05 01:23:23.618230 | orchestrator | }, 2026-04-05 01:23:23.618240 | orchestrator | { 2026-04-05 01:23:23.618259 | orchestrator | "type": "v1", 2026-04-05 01:23:23.618270 | orchestrator | "addr": "192.168.16.11:6789", 2026-04-05 01:23:23.618281 | orchestrator | 
"nonce": 0 2026-04-05 01:23:23.618292 | orchestrator | } 2026-04-05 01:23:23.618302 | orchestrator | ] 2026-04-05 01:23:23.618313 | orchestrator | }, 2026-04-05 01:23:23.618324 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-04-05 01:23:23.618334 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-04-05 01:23:23.618345 | orchestrator | "priority": 0, 2026-04-05 01:23:23.618356 | orchestrator | "weight": 0, 2026-04-05 01:23:23.618367 | orchestrator | "crush_location": "{}" 2026-04-05 01:23:23.618377 | orchestrator | }, 2026-04-05 01:23:23.618388 | orchestrator | { 2026-04-05 01:23:23.618399 | orchestrator | "rank": 2, 2026-04-05 01:23:23.618409 | orchestrator | "name": "testbed-node-2", 2026-04-05 01:23:23.618420 | orchestrator | "public_addrs": { 2026-04-05 01:23:23.618431 | orchestrator | "addrvec": [ 2026-04-05 01:23:23.618442 | orchestrator | { 2026-04-05 01:23:23.618452 | orchestrator | "type": "v2", 2026-04-05 01:23:23.618463 | orchestrator | "addr": "192.168.16.12:3300", 2026-04-05 01:23:23.618474 | orchestrator | "nonce": 0 2026-04-05 01:23:23.618484 | orchestrator | }, 2026-04-05 01:23:23.618495 | orchestrator | { 2026-04-05 01:23:23.618506 | orchestrator | "type": "v1", 2026-04-05 01:23:23.618516 | orchestrator | "addr": "192.168.16.12:6789", 2026-04-05 01:23:23.618527 | orchestrator | "nonce": 0 2026-04-05 01:23:23.618538 | orchestrator | } 2026-04-05 01:23:23.618549 | orchestrator | ] 2026-04-05 01:23:23.618559 | orchestrator | }, 2026-04-05 01:23:23.618570 | orchestrator | "addr": "192.168.16.12:6789/0", 2026-04-05 01:23:23.618581 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-04-05 01:23:23.618596 | orchestrator | "priority": 0, 2026-04-05 01:23:23.618614 | orchestrator | "weight": 0, 2026-04-05 01:23:23.618643 | orchestrator | "crush_location": "{}" 2026-04-05 01:23:23.618665 | orchestrator | } 2026-04-05 01:23:23.618683 | orchestrator | ] 2026-04-05 01:23:23.618702 | orchestrator | } 2026-04-05 01:23:23.618721 | 
orchestrator | } 2026-04-05 01:23:23.618738 | orchestrator | 2026-04-05 01:23:23.618757 | orchestrator | # Ceph free space status 2026-04-05 01:23:23.618777 | orchestrator | 2026-04-05 01:23:23.618796 | orchestrator | + echo 2026-04-05 01:23:23.618814 | orchestrator | + echo '# Ceph free space status' 2026-04-05 01:23:23.618834 | orchestrator | + echo 2026-04-05 01:23:23.618853 | orchestrator | + ceph df 2026-04-05 01:23:24.181782 | orchestrator | --- RAW STORAGE --- 2026-04-05 01:23:24.181953 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-04-05 01:23:24.181988 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.89 2026-04-05 01:23:24.181999 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.89 2026-04-05 01:23:24.182009 | orchestrator | 2026-04-05 01:23:24.182069 | orchestrator | --- POOLS --- 2026-04-05 01:23:24.182081 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-04-05 01:23:24.182093 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 52 GiB 2026-04-05 01:23:24.182102 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2026-04-05 01:23:24.182112 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2026-04-05 01:23:24.182122 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-04-05 01:23:24.182132 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-04-05 01:23:24.182141 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-04-05 01:23:24.182151 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB 2026-04-05 01:23:24.182160 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-04-05 01:23:24.182169 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 52 GiB 2026-04-05 01:23:24.182179 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-04-05 01:23:24.182189 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2026-04-05 01:23:24.182198 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.97 35 GiB 2026-04-05 01:23:24.182208 | 
orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-04-05 01:23:24.182244 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-04-05 01:23:24.244800 | orchestrator | ++ semver latest 5.0.0 2026-04-05 01:23:24.303228 | orchestrator | + [[ -1 -eq -1 ]] 2026-04-05 01:23:24.303351 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-05 01:23:24.303369 | orchestrator | + osism apply facts 2026-04-05 01:23:25.676831 | orchestrator | 2026-04-05 01:23:25 | INFO  | Prepare task for execution of facts. 2026-04-05 01:23:25.747089 | orchestrator | 2026-04-05 01:23:25 | INFO  | Task 7f937cae-faf0-452b-805c-5989ecc5214d (facts) was prepared for execution. 2026-04-05 01:23:25.747207 | orchestrator | 2026-04-05 01:23:25 | INFO  | It takes a moment until task 7f937cae-faf0-452b-805c-5989ecc5214d (facts) has been started and output is visible here. 2026-04-05 01:23:38.383691 | orchestrator | 2026-04-05 01:23:38.383832 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-05 01:23:38.383862 | orchestrator | 2026-04-05 01:23:38.383881 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-05 01:23:38.383900 | orchestrator | Sunday 05 April 2026 01:23:29 +0000 (0:00:00.353) 0:00:00.353 ********** 2026-04-05 01:23:38.383982 | orchestrator | ok: [testbed-manager] 2026-04-05 01:23:38.384006 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:23:38.384027 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:23:38.384047 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:23:38.384069 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:23:38.384088 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:23:38.384108 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:23:38.384128 | orchestrator | 2026-04-05 01:23:38.384148 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-05 01:23:38.384169 | orchestrator | Sunday 05 April 2026 
01:23:30 +0000 (0:00:01.378) 0:00:01.731 ********** 2026-04-05 01:23:38.384182 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:23:38.384194 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:23:38.384205 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:23:38.384298 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:23:38.384318 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:23:38.384334 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:23:38.384352 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:23:38.384371 | orchestrator | 2026-04-05 01:23:38.384390 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-05 01:23:38.384408 | orchestrator | 2026-04-05 01:23:38.384428 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-05 01:23:38.384446 | orchestrator | Sunday 05 April 2026 01:23:31 +0000 (0:00:01.344) 0:00:03.076 ********** 2026-04-05 01:23:38.384467 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:23:38.384486 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:23:38.384505 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:23:38.384517 | orchestrator | ok: [testbed-manager] 2026-04-05 01:23:38.384528 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:23:38.384539 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:23:38.384550 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:23:38.384561 | orchestrator | 2026-04-05 01:23:38.384572 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-05 01:23:38.384583 | orchestrator | 2026-04-05 01:23:38.384596 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-05 01:23:38.384607 | orchestrator | Sunday 05 April 2026 01:23:37 +0000 (0:00:05.327) 0:00:08.403 ********** 2026-04-05 01:23:38.384618 | orchestrator | skipping: [testbed-manager] 2026-04-05 
01:23:38.384629 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:23:38.384640 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:23:38.384651 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:23:38.384662 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:23:38.384673 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:23:38.384684 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:23:38.384724 | orchestrator | 2026-04-05 01:23:38.384736 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:23:38.384747 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 01:23:38.384760 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 01:23:38.384770 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 01:23:38.384781 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 01:23:38.384792 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 01:23:38.384803 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 01:23:38.384814 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 01:23:38.384824 | orchestrator | 2026-04-05 01:23:38.384835 | orchestrator | 2026-04-05 01:23:38.384846 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:23:38.384857 | orchestrator | Sunday 05 April 2026 01:23:38 +0000 (0:00:00.809) 0:00:09.213 ********** 2026-04-05 01:23:38.384882 | orchestrator | =============================================================================== 2026-04-05 01:23:38.384893 | orchestrator | 
Gathers facts about hosts ----------------------------------------------- 5.33s 2026-04-05 01:23:38.384904 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.38s 2026-04-05 01:23:38.384943 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.34s 2026-04-05 01:23:38.384956 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.81s 2026-04-05 01:23:38.613806 | orchestrator | + osism validate ceph-mons 2026-04-05 01:24:10.624716 | orchestrator | 2026-04-05 01:24:10.624828 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2026-04-05 01:24:10.624845 | orchestrator | 2026-04-05 01:24:10.624856 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-04-05 01:24:10.624868 | orchestrator | Sunday 05 April 2026 01:23:53 +0000 (0:00:00.555) 0:00:00.555 ********** 2026-04-05 01:24:10.624879 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-05 01:24:10.624890 | orchestrator | 2026-04-05 01:24:10.624901 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-05 01:24:10.624912 | orchestrator | Sunday 05 April 2026 01:23:54 +0000 (0:00:01.056) 0:00:01.612 ********** 2026-04-05 01:24:10.625011 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-05 01:24:10.625027 | orchestrator | 2026-04-05 01:24:10.625038 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-05 01:24:10.625050 | orchestrator | Sunday 05 April 2026 01:23:55 +0000 (0:00:00.749) 0:00:02.362 ********** 2026-04-05 01:24:10.625061 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:24:10.625073 | orchestrator | 2026-04-05 01:24:10.625084 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-04-05 
01:24:10.625095 | orchestrator | Sunday 05 April 2026 01:23:55 +0000 (0:00:00.145) 0:00:02.508 ********** 2026-04-05 01:24:10.625106 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:24:10.625118 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:24:10.625128 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:24:10.625139 | orchestrator | 2026-04-05 01:24:10.625151 | orchestrator | TASK [Get container info] ****************************************************** 2026-04-05 01:24:10.625162 | orchestrator | Sunday 05 April 2026 01:23:56 +0000 (0:00:00.334) 0:00:02.842 ********** 2026-04-05 01:24:10.625193 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:24:10.625205 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:24:10.625217 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:24:10.625228 | orchestrator | 2026-04-05 01:24:10.625239 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-04-05 01:24:10.625252 | orchestrator | Sunday 05 April 2026 01:23:57 +0000 (0:00:01.607) 0:00:04.449 ********** 2026-04-05 01:24:10.625265 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:24:10.625278 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:24:10.625291 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:24:10.625304 | orchestrator | 2026-04-05 01:24:10.625317 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-04-05 01:24:10.625330 | orchestrator | Sunday 05 April 2026 01:23:58 +0000 (0:00:00.309) 0:00:04.759 ********** 2026-04-05 01:24:10.625343 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:24:10.625355 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:24:10.625368 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:24:10.625381 | orchestrator | 2026-04-05 01:24:10.625394 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-05 01:24:10.625406 | orchestrator | Sunday 05 April 2026 
01:23:58 +0000 (0:00:00.329) 0:00:05.088 ********** 2026-04-05 01:24:10.625419 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:24:10.625433 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:24:10.625445 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:24:10.625457 | orchestrator | 2026-04-05 01:24:10.625470 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-04-05 01:24:10.625482 | orchestrator | Sunday 05 April 2026 01:23:58 +0000 (0:00:00.340) 0:00:05.429 ********** 2026-04-05 01:24:10.625495 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:24:10.625508 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:24:10.625521 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:24:10.625533 | orchestrator | 2026-04-05 01:24:10.625546 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2026-04-05 01:24:10.625559 | orchestrator | Sunday 05 April 2026 01:23:59 +0000 (0:00:00.463) 0:00:05.892 ********** 2026-04-05 01:24:10.625573 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:24:10.625586 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:24:10.625598 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:24:10.625611 | orchestrator | 2026-04-05 01:24:10.625622 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-05 01:24:10.625633 | orchestrator | Sunday 05 April 2026 01:23:59 +0000 (0:00:00.368) 0:00:06.261 ********** 2026-04-05 01:24:10.625644 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:24:10.625655 | orchestrator | 2026-04-05 01:24:10.625666 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-05 01:24:10.625677 | orchestrator | Sunday 05 April 2026 01:23:59 +0000 (0:00:00.268) 0:00:06.530 ********** 2026-04-05 01:24:10.625688 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:24:10.625698 | orchestrator | 2026-04-05 
01:24:10.625709 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-05 01:24:10.625720 | orchestrator | Sunday 05 April 2026 01:24:00 +0000 (0:00:00.259) 0:00:06.790 ********** 2026-04-05 01:24:10.625731 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:24:10.625742 | orchestrator | 2026-04-05 01:24:10.625753 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-05 01:24:10.625764 | orchestrator | Sunday 05 April 2026 01:24:00 +0000 (0:00:00.299) 0:00:07.089 ********** 2026-04-05 01:24:10.625774 | orchestrator | 2026-04-05 01:24:10.625785 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-05 01:24:10.625796 | orchestrator | Sunday 05 April 2026 01:24:00 +0000 (0:00:00.077) 0:00:07.167 ********** 2026-04-05 01:24:10.625807 | orchestrator | 2026-04-05 01:24:10.625817 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-05 01:24:10.625828 | orchestrator | Sunday 05 April 2026 01:24:00 +0000 (0:00:00.077) 0:00:07.244 ********** 2026-04-05 01:24:10.625846 | orchestrator | 2026-04-05 01:24:10.625857 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-05 01:24:10.625868 | orchestrator | Sunday 05 April 2026 01:24:00 +0000 (0:00:00.242) 0:00:07.487 ********** 2026-04-05 01:24:10.625879 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:24:10.625890 | orchestrator | 2026-04-05 01:24:10.625901 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-04-05 01:24:10.625912 | orchestrator | Sunday 05 April 2026 01:24:01 +0000 (0:00:00.284) 0:00:07.772 ********** 2026-04-05 01:24:10.625923 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:24:10.625955 | orchestrator | 2026-04-05 01:24:10.625985 | orchestrator | TASK [Prepare quorum test vars] 
************************************************ 2026-04-05 01:24:10.625997 | orchestrator | Sunday 05 April 2026 01:24:01 +0000 (0:00:00.266) 0:00:08.038 ********** 2026-04-05 01:24:10.626008 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:24:10.626069 | orchestrator | 2026-04-05 01:24:10.626081 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2026-04-05 01:24:10.626092 | orchestrator | Sunday 05 April 2026 01:24:01 +0000 (0:00:00.135) 0:00:08.173 ********** 2026-04-05 01:24:10.626103 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:24:10.626114 | orchestrator | 2026-04-05 01:24:10.626124 | orchestrator | TASK [Set quorum test data] **************************************************** 2026-04-05 01:24:10.626135 | orchestrator | Sunday 05 April 2026 01:24:03 +0000 (0:00:01.717) 0:00:09.891 ********** 2026-04-05 01:24:10.626146 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:24:10.626157 | orchestrator | 2026-04-05 01:24:10.626180 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2026-04-05 01:24:10.626202 | orchestrator | Sunday 05 April 2026 01:24:03 +0000 (0:00:00.366) 0:00:10.257 ********** 2026-04-05 01:24:10.626213 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:24:10.626224 | orchestrator | 2026-04-05 01:24:10.626235 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2026-04-05 01:24:10.626245 | orchestrator | Sunday 05 April 2026 01:24:03 +0000 (0:00:00.148) 0:00:10.406 ********** 2026-04-05 01:24:10.626256 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:24:10.626267 | orchestrator | 2026-04-05 01:24:10.626278 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2026-04-05 01:24:10.626289 | orchestrator | Sunday 05 April 2026 01:24:04 +0000 (0:00:00.326) 0:00:10.732 ********** 2026-04-05 01:24:10.626300 | orchestrator | ok: [testbed-node-0] 
2026-04-05 01:24:10.626310 | orchestrator | 2026-04-05 01:24:10.626321 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2026-04-05 01:24:10.626351 | orchestrator | Sunday 05 April 2026 01:24:04 +0000 (0:00:00.307) 0:00:11.040 ********** 2026-04-05 01:24:10.626362 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:24:10.626373 | orchestrator | 2026-04-05 01:24:10.626384 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2026-04-05 01:24:10.626395 | orchestrator | Sunday 05 April 2026 01:24:04 +0000 (0:00:00.118) 0:00:11.158 ********** 2026-04-05 01:24:10.626411 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:24:10.626429 | orchestrator | 2026-04-05 01:24:10.626447 | orchestrator | TASK [Prepare status test vars] ************************************************ 2026-04-05 01:24:10.626465 | orchestrator | Sunday 05 April 2026 01:24:04 +0000 (0:00:00.121) 0:00:11.280 ********** 2026-04-05 01:24:10.626483 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:24:10.626501 | orchestrator | 2026-04-05 01:24:10.626520 | orchestrator | TASK [Gather status data] ****************************************************** 2026-04-05 01:24:10.626538 | orchestrator | Sunday 05 April 2026 01:24:04 +0000 (0:00:00.305) 0:00:11.585 ********** 2026-04-05 01:24:10.626556 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:24:10.626574 | orchestrator | 2026-04-05 01:24:10.626591 | orchestrator | TASK [Set health test data] **************************************************** 2026-04-05 01:24:10.626610 | orchestrator | Sunday 05 April 2026 01:24:06 +0000 (0:00:01.435) 0:00:13.021 ********** 2026-04-05 01:24:10.626628 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:24:10.626647 | orchestrator | 2026-04-05 01:24:10.626669 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2026-04-05 01:24:10.626680 | orchestrator | Sunday 05 April 2026 
01:24:06 +0000 (0:00:00.297) 0:00:13.318 ********** 2026-04-05 01:24:10.626690 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:24:10.626701 | orchestrator | 2026-04-05 01:24:10.626712 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2026-04-05 01:24:10.626722 | orchestrator | Sunday 05 April 2026 01:24:06 +0000 (0:00:00.165) 0:00:13.484 ********** 2026-04-05 01:24:10.626733 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:24:10.626743 | orchestrator | 2026-04-05 01:24:10.626754 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2026-04-05 01:24:10.626765 | orchestrator | Sunday 05 April 2026 01:24:06 +0000 (0:00:00.140) 0:00:13.625 ********** 2026-04-05 01:24:10.626775 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:24:10.626786 | orchestrator | 2026-04-05 01:24:10.626797 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2026-04-05 01:24:10.626807 | orchestrator | Sunday 05 April 2026 01:24:07 +0000 (0:00:00.163) 0:00:13.788 ********** 2026-04-05 01:24:10.626818 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:24:10.626834 | orchestrator | 2026-04-05 01:24:10.626845 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-05 01:24:10.626856 | orchestrator | Sunday 05 April 2026 01:24:07 +0000 (0:00:00.143) 0:00:13.932 ********** 2026-04-05 01:24:10.626867 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-05 01:24:10.626878 | orchestrator | 2026-04-05 01:24:10.626888 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-04-05 01:24:10.626899 | orchestrator | Sunday 05 April 2026 01:24:07 +0000 (0:00:00.263) 0:00:14.195 ********** 2026-04-05 01:24:10.626909 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:24:10.626920 | orchestrator | 2026-04-05 01:24:10.626953 
| orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-05 01:24:10.626964 | orchestrator | Sunday 05 April 2026 01:24:07 +0000 (0:00:00.263) 0:00:14.458 ********** 2026-04-05 01:24:10.626975 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-05 01:24:10.626986 | orchestrator | 2026-04-05 01:24:10.626997 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-05 01:24:10.627014 | orchestrator | Sunday 05 April 2026 01:24:09 +0000 (0:00:01.807) 0:00:16.266 ********** 2026-04-05 01:24:10.627025 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-05 01:24:10.627036 | orchestrator | 2026-04-05 01:24:10.627047 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-05 01:24:10.627058 | orchestrator | Sunday 05 April 2026 01:24:09 +0000 (0:00:00.306) 0:00:16.572 ********** 2026-04-05 01:24:10.627069 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-05 01:24:10.627080 | orchestrator | 2026-04-05 01:24:10.627101 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-05 01:24:13.013878 | orchestrator | Sunday 05 April 2026 01:24:10 +0000 (0:00:00.762) 0:00:17.334 ********** 2026-04-05 01:24:13.014007 | orchestrator | 2026-04-05 01:24:13.014077 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-05 01:24:13.014087 | orchestrator | Sunday 05 April 2026 01:24:10 +0000 (0:00:00.071) 0:00:17.406 ********** 2026-04-05 01:24:13.014095 | orchestrator | 2026-04-05 01:24:13.014104 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-05 01:24:13.014112 | orchestrator | Sunday 05 April 2026 01:24:10 +0000 (0:00:00.069) 0:00:17.476 ********** 2026-04-05 01:24:13.014120 | orchestrator | 2026-04-05 
01:24:13.014128 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-05 01:24:13.014137 | orchestrator | Sunday 05 April 2026 01:24:10 +0000 (0:00:00.106) 0:00:17.583 ********** 2026-04-05 01:24:13.014145 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-05 01:24:13.014153 | orchestrator | 2026-04-05 01:24:13.014161 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-05 01:24:13.014193 | orchestrator | Sunday 05 April 2026 01:24:12 +0000 (0:00:01.316) 0:00:18.900 ********** 2026-04-05 01:24:13.014202 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-04-05 01:24:13.014210 | orchestrator |  "msg": [ 2026-04-05 01:24:13.014219 | orchestrator |  "Validator run completed.", 2026-04-05 01:24:13.014228 | orchestrator |  "You can find the report file here:", 2026-04-05 01:24:13.014236 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-04-05T01:23:54+00:00-report.json", 2026-04-05 01:24:13.014245 | orchestrator |  "on the following host:", 2026-04-05 01:24:13.014253 | orchestrator |  "testbed-manager" 2026-04-05 01:24:13.014261 | orchestrator |  ] 2026-04-05 01:24:13.014269 | orchestrator | } 2026-04-05 01:24:13.014278 | orchestrator | 2026-04-05 01:24:13.014286 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:24:13.014295 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-04-05 01:24:13.014305 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 01:24:13.014313 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 01:24:13.014321 | orchestrator | 2026-04-05 01:24:13.014329 | orchestrator | 2026-04-05 01:24:13.014337 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-05 01:24:13.014344 | orchestrator | Sunday 05 April 2026 01:24:12 +0000 (0:00:00.498) 0:00:19.399 ********** 2026-04-05 01:24:13.014352 | orchestrator | =============================================================================== 2026-04-05 01:24:13.014360 | orchestrator | Aggregate test results step one ----------------------------------------- 1.81s 2026-04-05 01:24:13.014368 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.72s 2026-04-05 01:24:13.014376 | orchestrator | Get container info ------------------------------------------------------ 1.61s 2026-04-05 01:24:13.014384 | orchestrator | Gather status data ------------------------------------------------------ 1.44s 2026-04-05 01:24:13.014391 | orchestrator | Write report file ------------------------------------------------------- 1.32s 2026-04-05 01:24:13.014399 | orchestrator | Get timestamp for report file ------------------------------------------- 1.06s 2026-04-05 01:24:13.014407 | orchestrator | Aggregate test results step three --------------------------------------- 0.76s 2026-04-05 01:24:13.014414 | orchestrator | Create report output directory ------------------------------------------ 0.75s 2026-04-05 01:24:13.014422 | orchestrator | Print report file information ------------------------------------------- 0.50s 2026-04-05 01:24:13.014431 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.46s 2026-04-05 01:24:13.014440 | orchestrator | Flush handlers ---------------------------------------------------------- 0.40s 2026-04-05 01:24:13.014450 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.37s 2026-04-05 01:24:13.014459 | orchestrator | Set quorum test data ---------------------------------------------------- 0.37s 2026-04-05 01:24:13.014469 | orchestrator | Prepare test data 
------------------------------------------------------- 0.34s 2026-04-05 01:24:13.014478 | orchestrator | Prepare test data for container existance test -------------------------- 0.33s 2026-04-05 01:24:13.014488 | orchestrator | Set test result to passed if container is existing ---------------------- 0.33s 2026-04-05 01:24:13.014497 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.33s 2026-04-05 01:24:13.014506 | orchestrator | Set test result to failed if container is missing ----------------------- 0.31s 2026-04-05 01:24:13.014515 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.31s 2026-04-05 01:24:13.014524 | orchestrator | Aggregate test results step two ----------------------------------------- 0.31s 2026-04-05 01:24:13.217600 | orchestrator | + osism validate ceph-mgrs 2026-04-05 01:24:43.156582 | orchestrator | 2026-04-05 01:24:43.156731 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2026-04-05 01:24:43.156764 | orchestrator | 2026-04-05 01:24:43.156781 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-04-05 01:24:43.156793 | orchestrator | Sunday 05 April 2026 01:24:28 +0000 (0:00:00.585) 0:00:00.585 ********** 2026-04-05 01:24:43.156805 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-05 01:24:43.156816 | orchestrator | 2026-04-05 01:24:43.156827 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-05 01:24:43.156838 | orchestrator | Sunday 05 April 2026 01:24:29 +0000 (0:00:01.056) 0:00:01.642 ********** 2026-04-05 01:24:43.156849 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-05 01:24:43.156860 | orchestrator | 2026-04-05 01:24:43.156871 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-05 
01:24:43.156882 | orchestrator | Sunday 05 April 2026 01:24:30 +0000 (0:00:00.758) 0:00:02.400 ********** 2026-04-05 01:24:43.156893 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:24:43.156905 | orchestrator | 2026-04-05 01:24:43.156916 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-04-05 01:24:43.156927 | orchestrator | Sunday 05 April 2026 01:24:30 +0000 (0:00:00.136) 0:00:02.537 ********** 2026-04-05 01:24:43.156938 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:24:43.156978 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:24:43.156992 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:24:43.157003 | orchestrator | 2026-04-05 01:24:43.157014 | orchestrator | TASK [Get container info] ****************************************************** 2026-04-05 01:24:43.157024 | orchestrator | Sunday 05 April 2026 01:24:30 +0000 (0:00:00.329) 0:00:02.866 ********** 2026-04-05 01:24:43.157035 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:24:43.157046 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:24:43.157057 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:24:43.157070 | orchestrator | 2026-04-05 01:24:43.157089 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-04-05 01:24:43.157106 | orchestrator | Sunday 05 April 2026 01:24:32 +0000 (0:00:01.538) 0:00:04.404 ********** 2026-04-05 01:24:43.157125 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:24:43.157144 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:24:43.157164 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:24:43.157184 | orchestrator | 2026-04-05 01:24:43.157202 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-04-05 01:24:43.157221 | orchestrator | Sunday 05 April 2026 01:24:32 +0000 (0:00:00.305) 0:00:04.709 ********** 2026-04-05 01:24:43.157240 | orchestrator | ok: [testbed-node-0] 2026-04-05 
01:24:43.157261 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:24:43.157273 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:24:43.157285 | orchestrator | 2026-04-05 01:24:43.157298 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-05 01:24:43.157311 | orchestrator | Sunday 05 April 2026 01:24:32 +0000 (0:00:00.309) 0:00:05.019 ********** 2026-04-05 01:24:43.157323 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:24:43.157335 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:24:43.157348 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:24:43.157360 | orchestrator | 2026-04-05 01:24:43.157373 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 2026-04-05 01:24:43.157385 | orchestrator | Sunday 05 April 2026 01:24:33 +0000 (0:00:00.336) 0:00:05.356 ********** 2026-04-05 01:24:43.157397 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:24:43.157409 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:24:43.157421 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:24:43.157434 | orchestrator | 2026-04-05 01:24:43.157446 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2026-04-05 01:24:43.157458 | orchestrator | Sunday 05 April 2026 01:24:33 +0000 (0:00:00.476) 0:00:05.832 ********** 2026-04-05 01:24:43.157494 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:24:43.157506 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:24:43.157517 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:24:43.157528 | orchestrator | 2026-04-05 01:24:43.157538 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-05 01:24:43.157549 | orchestrator | Sunday 05 April 2026 01:24:33 +0000 (0:00:00.314) 0:00:06.147 ********** 2026-04-05 01:24:43.157560 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:24:43.157571 | orchestrator | 2026-04-05 
01:24:43.157653 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-05 01:24:43.157666 | orchestrator | Sunday 05 April 2026 01:24:34 +0000 (0:00:00.259) 0:00:06.406 ********** 2026-04-05 01:24:43.157676 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:24:43.157687 | orchestrator | 2026-04-05 01:24:43.157698 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-05 01:24:43.157708 | orchestrator | Sunday 05 April 2026 01:24:34 +0000 (0:00:00.298) 0:00:06.705 ********** 2026-04-05 01:24:43.157719 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:24:43.157730 | orchestrator | 2026-04-05 01:24:43.157741 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-05 01:24:43.157751 | orchestrator | Sunday 05 April 2026 01:24:34 +0000 (0:00:00.243) 0:00:06.948 ********** 2026-04-05 01:24:43.157762 | orchestrator | 2026-04-05 01:24:43.157773 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-05 01:24:43.157783 | orchestrator | Sunday 05 April 2026 01:24:34 +0000 (0:00:00.076) 0:00:07.025 ********** 2026-04-05 01:24:43.157794 | orchestrator | 2026-04-05 01:24:43.157805 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-05 01:24:43.157815 | orchestrator | Sunday 05 April 2026 01:24:34 +0000 (0:00:00.068) 0:00:07.093 ********** 2026-04-05 01:24:43.157826 | orchestrator | 2026-04-05 01:24:43.157836 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-05 01:24:43.157847 | orchestrator | Sunday 05 April 2026 01:24:35 +0000 (0:00:00.233) 0:00:07.327 ********** 2026-04-05 01:24:43.157858 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:24:43.157869 | orchestrator | 2026-04-05 01:24:43.157879 | orchestrator | TASK [Fail due to missing containers] 
****************************************** 2026-04-05 01:24:43.157890 | orchestrator | Sunday 05 April 2026 01:24:35 +0000 (0:00:00.249) 0:00:07.576 ********** 2026-04-05 01:24:43.157901 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:24:43.157912 | orchestrator | 2026-04-05 01:24:43.157982 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2026-04-05 01:24:43.157997 | orchestrator | Sunday 05 April 2026 01:24:35 +0000 (0:00:00.266) 0:00:07.843 ********** 2026-04-05 01:24:43.158008 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:24:43.158100 | orchestrator | 2026-04-05 01:24:43.158121 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2026-04-05 01:24:43.158140 | orchestrator | Sunday 05 April 2026 01:24:35 +0000 (0:00:00.148) 0:00:07.991 ********** 2026-04-05 01:24:43.158159 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:24:43.158177 | orchestrator | 2026-04-05 01:24:43.158192 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2026-04-05 01:24:43.158203 | orchestrator | Sunday 05 April 2026 01:24:37 +0000 (0:00:01.787) 0:00:09.778 ********** 2026-04-05 01:24:43.158214 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:24:43.158225 | orchestrator | 2026-04-05 01:24:43.158235 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2026-04-05 01:24:43.158247 | orchestrator | Sunday 05 April 2026 01:24:37 +0000 (0:00:00.263) 0:00:10.042 ********** 2026-04-05 01:24:43.158257 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:24:43.158268 | orchestrator | 2026-04-05 01:24:43.158279 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2026-04-05 01:24:43.158290 | orchestrator | Sunday 05 April 2026 01:24:38 +0000 (0:00:00.326) 0:00:10.368 ********** 2026-04-05 01:24:43.158301 | orchestrator | skipping: [testbed-node-0] 
2026-04-05 01:24:43.158325 | orchestrator | 2026-04-05 01:24:43.158336 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2026-04-05 01:24:43.158346 | orchestrator | Sunday 05 April 2026 01:24:38 +0000 (0:00:00.135) 0:00:10.503 ********** 2026-04-05 01:24:43.158357 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:24:43.158368 | orchestrator | 2026-04-05 01:24:43.158378 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-05 01:24:43.158389 | orchestrator | Sunday 05 April 2026 01:24:38 +0000 (0:00:00.165) 0:00:10.668 ********** 2026-04-05 01:24:43.158399 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-05 01:24:43.158410 | orchestrator | 2026-04-05 01:24:43.158421 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-04-05 01:24:43.158432 | orchestrator | Sunday 05 April 2026 01:24:38 +0000 (0:00:00.263) 0:00:10.932 ********** 2026-04-05 01:24:43.158442 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:24:43.158453 | orchestrator | 2026-04-05 01:24:43.158464 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-05 01:24:43.158475 | orchestrator | Sunday 05 April 2026 01:24:38 +0000 (0:00:00.253) 0:00:11.185 ********** 2026-04-05 01:24:43.158485 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-05 01:24:43.158496 | orchestrator | 2026-04-05 01:24:43.158507 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-05 01:24:43.158517 | orchestrator | Sunday 05 April 2026 01:24:40 +0000 (0:00:01.640) 0:00:12.826 ********** 2026-04-05 01:24:43.158528 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-05 01:24:43.158539 | orchestrator | 2026-04-05 01:24:43.158549 | orchestrator | TASK [Aggregate test results step three] 
*************************************** 2026-04-05 01:24:43.158560 | orchestrator | Sunday 05 April 2026 01:24:40 +0000 (0:00:00.274) 0:00:13.100 ********** 2026-04-05 01:24:43.158571 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-05 01:24:43.158582 | orchestrator | 2026-04-05 01:24:43.158592 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-05 01:24:43.158603 | orchestrator | Sunday 05 April 2026 01:24:41 +0000 (0:00:00.261) 0:00:13.362 ********** 2026-04-05 01:24:43.158614 | orchestrator | 2026-04-05 01:24:43.158624 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-05 01:24:43.158635 | orchestrator | Sunday 05 April 2026 01:24:41 +0000 (0:00:00.087) 0:00:13.449 ********** 2026-04-05 01:24:43.158646 | orchestrator | 2026-04-05 01:24:43.158657 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-05 01:24:43.158667 | orchestrator | Sunday 05 April 2026 01:24:41 +0000 (0:00:00.075) 0:00:13.524 ********** 2026-04-05 01:24:43.158678 | orchestrator | 2026-04-05 01:24:43.158689 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-05 01:24:43.158700 | orchestrator | Sunday 05 April 2026 01:24:41 +0000 (0:00:00.075) 0:00:13.600 ********** 2026-04-05 01:24:43.158711 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-05 01:24:43.158721 | orchestrator | 2026-04-05 01:24:43.158732 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-05 01:24:43.158743 | orchestrator | Sunday 05 April 2026 01:24:42 +0000 (0:00:01.350) 0:00:14.951 ********** 2026-04-05 01:24:43.158754 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-04-05 01:24:43.158764 | orchestrator |  "msg": [ 2026-04-05 01:24:43.158777 | orchestrator |  
"Validator run completed.", 2026-04-05 01:24:43.158788 | orchestrator |  "You can find the report file here:", 2026-04-05 01:24:43.158799 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-04-05T01:24:29+00:00-report.json", 2026-04-05 01:24:43.158811 | orchestrator |  "on the following host:", 2026-04-05 01:24:43.158822 | orchestrator |  "testbed-manager" 2026-04-05 01:24:43.158833 | orchestrator |  ] 2026-04-05 01:24:43.158844 | orchestrator | } 2026-04-05 01:24:43.158855 | orchestrator | 2026-04-05 01:24:43.158866 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:24:43.158885 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-05 01:24:43.158898 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 01:24:43.158927 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 01:24:43.528079 | orchestrator | 2026-04-05 01:24:43.528186 | orchestrator | 2026-04-05 01:24:43.528202 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:24:43.528216 | orchestrator | Sunday 05 April 2026 01:24:43 +0000 (0:00:00.399) 0:00:15.351 ********** 2026-04-05 01:24:43.528228 | orchestrator | =============================================================================== 2026-04-05 01:24:43.528239 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.79s 2026-04-05 01:24:43.528250 | orchestrator | Aggregate test results step one ----------------------------------------- 1.64s 2026-04-05 01:24:43.528261 | orchestrator | Get container info ------------------------------------------------------ 1.54s 2026-04-05 01:24:43.528271 | orchestrator | Write report file ------------------------------------------------------- 1.35s 2026-04-05 
01:24:43.528282 | orchestrator | Get timestamp for report file ------------------------------------------- 1.06s 2026-04-05 01:24:43.528293 | orchestrator | Create report output directory ------------------------------------------ 0.76s 2026-04-05 01:24:43.528303 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.48s 2026-04-05 01:24:43.528315 | orchestrator | Print report file information ------------------------------------------- 0.40s 2026-04-05 01:24:43.528326 | orchestrator | Flush handlers ---------------------------------------------------------- 0.38s 2026-04-05 01:24:43.528336 | orchestrator | Prepare test data ------------------------------------------------------- 0.34s 2026-04-05 01:24:43.528348 | orchestrator | Prepare test data for container existance test -------------------------- 0.33s 2026-04-05 01:24:43.528358 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.33s 2026-04-05 01:24:43.528369 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.31s 2026-04-05 01:24:43.528380 | orchestrator | Set test result to passed if container is existing ---------------------- 0.31s 2026-04-05 01:24:43.528391 | orchestrator | Set test result to failed if container is missing ----------------------- 0.31s 2026-04-05 01:24:43.528402 | orchestrator | Aggregate test results step two ----------------------------------------- 0.30s 2026-04-05 01:24:43.528413 | orchestrator | Aggregate test results step two ----------------------------------------- 0.27s 2026-04-05 01:24:43.528424 | orchestrator | Fail due to missing containers ------------------------------------------ 0.27s 2026-04-05 01:24:43.528435 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.26s 2026-04-05 01:24:43.528446 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.26s 2026-04-05 01:24:43.727490 
| orchestrator | + osism validate ceph-osds 2026-04-05 01:25:03.210811 | orchestrator | 2026-04-05 01:25:03.210891 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2026-04-05 01:25:03.210898 | orchestrator | 2026-04-05 01:25:03.210902 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-04-05 01:25:03.210907 | orchestrator | Sunday 05 April 2026 01:24:58 +0000 (0:00:00.527) 0:00:00.527 ********** 2026-04-05 01:25:03.210912 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-05 01:25:03.210916 | orchestrator | 2026-04-05 01:25:03.210921 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-05 01:25:03.210925 | orchestrator | Sunday 05 April 2026 01:24:59 +0000 (0:00:01.022) 0:00:01.549 ********** 2026-04-05 01:25:03.210929 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-05 01:25:03.210933 | orchestrator | 2026-04-05 01:25:03.210952 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-05 01:25:03.210956 | orchestrator | Sunday 05 April 2026 01:25:00 +0000 (0:00:00.268) 0:00:01.818 ********** 2026-04-05 01:25:03.210960 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-05 01:25:03.210991 | orchestrator | 2026-04-05 01:25:03.210997 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-05 01:25:03.211003 | orchestrator | Sunday 05 April 2026 01:25:00 +0000 (0:00:00.709) 0:00:02.527 ********** 2026-04-05 01:25:03.211009 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:25:03.211017 | orchestrator | 2026-04-05 01:25:03.211023 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-04-05 01:25:03.211029 | orchestrator | Sunday 05 April 2026 01:25:01 +0000 (0:00:00.134) 0:00:02.662 
********** 2026-04-05 01:25:03.211036 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:25:03.211041 | orchestrator | 2026-04-05 01:25:03.211045 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-04-05 01:25:03.211049 | orchestrator | Sunday 05 April 2026 01:25:01 +0000 (0:00:00.157) 0:00:02.819 ********** 2026-04-05 01:25:03.211053 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:25:03.211057 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:25:03.211061 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:25:03.211064 | orchestrator | 2026-04-05 01:25:03.211068 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-04-05 01:25:03.211072 | orchestrator | Sunday 05 April 2026 01:25:01 +0000 (0:00:00.508) 0:00:03.328 ********** 2026-04-05 01:25:03.211076 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:25:03.211080 | orchestrator | 2026-04-05 01:25:03.211084 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-04-05 01:25:03.211087 | orchestrator | Sunday 05 April 2026 01:25:01 +0000 (0:00:00.158) 0:00:03.487 ********** 2026-04-05 01:25:03.211091 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:25:03.211095 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:25:03.211099 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:25:03.211103 | orchestrator | 2026-04-05 01:25:03.211106 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2026-04-05 01:25:03.211110 | orchestrator | Sunday 05 April 2026 01:25:02 +0000 (0:00:00.387) 0:00:03.875 ********** 2026-04-05 01:25:03.211114 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:25:03.211118 | orchestrator | 2026-04-05 01:25:03.211122 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-05 01:25:03.211126 | orchestrator | Sunday 05 April 2026 
01:25:02 +0000 (0:00:00.364) 0:00:04.239 ********** 2026-04-05 01:25:03.211130 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:25:03.211133 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:25:03.211137 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:25:03.211141 | orchestrator | 2026-04-05 01:25:03.211145 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2026-04-05 01:25:03.211149 | orchestrator | Sunday 05 April 2026 01:25:02 +0000 (0:00:00.322) 0:00:04.562 ********** 2026-04-05 01:25:03.211155 | orchestrator | skipping: [testbed-node-3] => (item={'id': '700d51bf6d30525d10026e3a1e624d8e70e815805481ce23da078ce520844dc9', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-04-05 01:25:03.211162 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7045cc6ba9ce9c19975ead6b0b7ecb23157758fbef1085c6698463d8d180103d', 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-05 01:25:03.211168 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f3ef5a42391115c803a6ac2338c74714bcfe4ffe01229064ea418d265d99c6e8', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-05 01:25:03.211173 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1e780a8e4958fa08a78c19cecbe7b82095b4f9bee2a99207693c12125513faa7', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2026-04-05 01:25:03.211190 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ba57fe0c39cb3cc6e8b4741fa996475304502354734fc6bc44f4fc97b1929fa3', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'name': 
'/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2026-04-05 01:25:03.211205 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3d5a69d11c3592f84ee74604a5c286ca95cca121aa5ba51e0e4aee407eab20a0', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 17 minutes'})  2026-04-05 01:25:03.211209 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ddbaca60e089b69c48af2d7ccb58b6cdb150faed0a6880ac63e2d65e7c05d0b0', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 17 minutes'})  2026-04-05 01:25:03.211215 | orchestrator | skipping: [testbed-node-3] => (item={'id': '015cc86726b23266991a69e71ce288a0048ee4967eb1e4b9a04b5999da0179a9', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 23 minutes'})  2026-04-05 01:25:03.211219 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b09408021761f0f8510e8eab6ea89c0f3aec1891fcc4a8782b9ac5d4454d0772', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 24 minutes'})  2026-04-05 01:25:03.211223 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9c4409e0b84ce2217896edc44d6db0f65ac54a05f8fd879dab86950d175afc73', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 24 minutes'})  2026-04-05 01:25:03.211227 | orchestrator | ok: [testbed-node-3] => (item={'id': 'aac6e19137e376796225e9a296529274feddcfdac15cf4b1cc52e4db3278aecf', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 25 minutes'}) 2026-04-05 01:25:03.211231 | orchestrator | ok: [testbed-node-3] => (item={'id': 
'beaf19210e9c276502299fc81cec6025e96fd7ec08239c51ea4ebdfb7d41487b', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 25 minutes'}) 2026-04-05 01:25:03.211235 | orchestrator | skipping: [testbed-node-3] => (item={'id': '47fbd2781fc8f335270d0a56fd4057f3897dcb577a37ffefed126a178b98c91e', 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 29 minutes'})  2026-04-05 01:25:03.211246 | orchestrator | skipping: [testbed-node-3] => (item={'id': '191b05d40efe413b12b30b1aa558d3c30f2280ae13950d7bc098cc635de20115', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2026-04-05 01:25:03.211253 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0c268cfd2a2152c8ade63fd782e68a8c6e252bd26a27bbfcf82ccd307c3cc692', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2026-04-05 01:25:03.211257 | orchestrator | skipping: [testbed-node-3] => (item={'id': '72cb7fb50c229f8086a7e32cf41f98104c3bb27ed204ac154fc1323f2e6ca939', 'image': 'registry.osism.tech/kolla/cron:2025.1', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})  2026-04-05 01:25:03.211261 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd4e126cb20cfa1e48d3428ae60fda0dd5013286c3d598c5981af7e798861ea9a', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 32 minutes'})  2026-04-05 01:25:03.211265 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0d507c7f8e9b439d8f9f1075bcacf9df899572a4da80c56c3097da96b416fe10', 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2026-04-05 01:25:03.211273 | orchestrator | skipping: 
[testbed-node-4] => (item={'id': '655d2b146e033736a90a4773d2355b9823abc0af66926feb346c32973326890a', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-04-05 01:25:03.211277 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f101a2fcce8a567aec2c15e4734c3239360512734679db87b942545fbaae764d', 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-04-05 01:25:03.211281 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c4682b71595fc18c63a4ffac782173fabbae4ce6142fde5a0fef6be348fc51d4', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-05 01:25:03.211289 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e408456611d25706e0a2af83e4f05f5f93a27a29e5eac0d24753d49b9679e024', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2026-04-05 01:25:03.425417 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a8700a863df0fabb6d362bcca87fe2528fc40132d7ed95d447eb533c88d409d8', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2026-04-05 01:25:03.425544 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9f76210ec56bd6137f805ad828d9e02c548d22d321c1bf4eef4fbe3e1458cb60', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 17 minutes'})  2026-04-05 01:25:03.425569 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2b516e7adb452130572be55f1c03a52ea443a0e74a17dfc2934a17ea0131b06d', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'name': 
'/prometheus_node_exporter', 'state': 'running', 'status': 'Up 17 minutes'})  2026-04-05 01:25:03.425588 | orchestrator | skipping: [testbed-node-4] => (item={'id': '928752cf67ae29e4efca6a97d26a1fac0e34c7a31f89a5517851ff6523c4fdcf', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 23 minutes'})  2026-04-05 01:25:03.425606 | orchestrator | skipping: [testbed-node-4] => (item={'id': '06c26e02498f416f47be343ce0b1d69d636f9f10b02d3b240f585ebdd5745ec8', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 24 minutes'})  2026-04-05 01:25:03.425620 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a9e67596652c6adfa822bdc4e72bdc33297bcf304ea31edce22cff389b1c6a20', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 24 minutes'})  2026-04-05 01:25:03.425632 | orchestrator | ok: [testbed-node-4] => (item={'id': '72bcbbc56725498ff53d7886db048aa8289b9b4e736e53b2ce1496445d12348f', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 25 minutes'}) 2026-04-05 01:25:03.425659 | orchestrator | ok: [testbed-node-4] => (item={'id': '8bed5a3ffafec22a429d07c04fea78f274306c3daeb86d9aa8e08465c4c28fc3', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 25 minutes'}) 2026-04-05 01:25:03.425670 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'af578da17455202633a921ba56c9d9b17c004eccda4140052686817e21d7aef8', 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 29 minutes'})  2026-04-05 01:25:03.425680 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7261aedef763f4999a51909697330ef417eb39fdec40f0aea68ac7f7be4c5569', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2026-04-05 01:25:03.425710 | orchestrator | skipping: [testbed-node-4] => (item={'id': '13cbd46856e39ce9aa3873a50a5fe476ec584b325acd5eab6abcc71478d2394e', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2026-04-05 01:25:03.425721 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5a983f555d331334d9f38f0384d496ae5183b871e6127ba325b13a777b3eef64', 'image': 'registry.osism.tech/kolla/cron:2025.1', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})  2026-04-05 01:25:03.425731 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c357fbde6b835e5a0c814e4644d54743a9702e19a4a0494344ce49481f7fce11', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 32 minutes'})  2026-04-05 01:25:03.425741 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1d2f13a1225a842278b5274261d270f82957e9bd6bb406f4c0f0923a826a65f6', 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2026-04-05 01:25:03.425751 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6ac65a52d8703052387f52cee57bd8728579e20e5f4572328346768ef5a0d03e', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-04-05 01:25:03.425781 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b51ad728a8e57a42db1b452e7145e448be0d133b476053f0e543d4645c8a2fda', 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-04-05 01:25:03.425792 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'7b184d3ba208d41d01b0f1954957489d0e43d73960617eae1d63354d22c101d2', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-05 01:25:03.425802 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2954914e17864b59981187ad7ee2f189805064fc3bc4f3a6a5dde58afa38514d', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2026-04-05 01:25:03.425812 | orchestrator | skipping: [testbed-node-5] => (item={'id': '08b36b68ea37e7c6e57246271c3f33619f6357f2d22b6fa403d0b45d52011dfb', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2026-04-05 01:25:03.425822 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5c87f108c46ad4fed6c5415e4d8ca942bf2f671c0b1ae9e7156b8e7e82866be1', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 17 minutes'})  2026-04-05 01:25:03.425832 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f083ccfd6aea7d3fa3dabcd41da61f3ff0a72388b6d80885b20483407aae059d', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 17 minutes'})  2026-04-05 01:25:03.425842 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2592258aa1f17ce52c1996d20a6609812bb85deaba1cbc2595a0ac48c61f262a', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 23 minutes'})  2026-04-05 01:25:03.425852 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f2bc0c7fa324b6e34835db54b33908082b639b8f3bd7e55ac595f7265672b9f0', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': 
'/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 24 minutes'})  2026-04-05 01:25:03.425867 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a650797efb70e715810500124ead34d1fcf8f73a867e8104ce73b6465e21dddb', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 24 minutes'})  2026-04-05 01:25:03.425885 | orchestrator | ok: [testbed-node-5] => (item={'id': 'a2e6b69ded0da61c67e124179b0c6345268928b668077ab65da6e1390745f186', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 25 minutes'}) 2026-04-05 01:25:03.425895 | orchestrator | ok: [testbed-node-5] => (item={'id': '73bb70c20b32620fb1322556015232f3479434896d0c7cfce0fb3f86d6779e64', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 25 minutes'}) 2026-04-05 01:25:03.425905 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e0e10f86fe2c5c65bef9f1a646c7f17e5347f3e66adda44aaefdad6f57678af9', 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 29 minutes'})  2026-04-05 01:25:03.425915 | orchestrator | skipping: [testbed-node-5] => (item={'id': '49b7b5a8ac0de525d1a4106840bb78e3955ff05f5062333365dcddb65fdc82c5', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2026-04-05 01:25:03.425925 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e01508f9605b37f2374864cbfdbe7ef08fb0fb9ae8865fb86e8f7bfee75807d9', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2026-04-05 01:25:03.425934 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'aaf90dc2d1f725e9f5166a401be956850b5c3857385b322a5c7f6de475c31a01', 'image': 
'registry.osism.tech/kolla/cron:2025.1', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})  2026-04-05 01:25:03.425944 | orchestrator | skipping: [testbed-node-5] => (item={'id': '039620c101386e63bb4f711d56a6f1f245d9e8f6c6cd65489b4dc9fccc9950c2', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 32 minutes'})  2026-04-05 01:25:03.425988 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f8e002800a0fdee1a68996cc16484613dfdffac299cd26291a4c56bee6d6d521', 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2026-04-05 01:25:17.360900 | orchestrator | 2026-04-05 01:25:17.361089 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2026-04-05 01:25:17.361111 | orchestrator | Sunday 05 April 2026 01:25:03 +0000 (0:00:00.709) 0:00:05.272 ********** 2026-04-05 01:25:17.361124 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:25:17.361923 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:25:17.361956 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:25:17.361970 | orchestrator | 2026-04-05 01:25:17.362090 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-04-05 01:25:17.362106 | orchestrator | Sunday 05 April 2026 01:25:04 +0000 (0:00:00.324) 0:00:05.596 ********** 2026-04-05 01:25:17.362117 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:25:17.362129 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:25:17.362142 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:25:17.362161 | orchestrator | 2026-04-05 01:25:17.362193 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-04-05 01:25:17.362211 | orchestrator | Sunday 05 April 2026 01:25:04 +0000 (0:00:00.305) 0:00:05.901 ********** 2026-04-05 01:25:17.362228 | orchestrator | ok: 
[testbed-node-3] 2026-04-05 01:25:17.362246 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:25:17.362264 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:25:17.362282 | orchestrator | 2026-04-05 01:25:17.362302 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-05 01:25:17.362321 | orchestrator | Sunday 05 April 2026 01:25:04 +0000 (0:00:00.329) 0:00:06.231 ********** 2026-04-05 01:25:17.362369 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:25:17.362381 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:25:17.362391 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:25:17.362402 | orchestrator | 2026-04-05 01:25:17.362413 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-04-05 01:25:17.362424 | orchestrator | Sunday 05 April 2026 01:25:05 +0000 (0:00:00.493) 0:00:06.725 ********** 2026-04-05 01:25:17.362435 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2026-04-05 01:25:17.362447 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-04-05 01:25:17.362458 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:25:17.362468 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-04-05 01:25:17.362479 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2026-04-05 01:25:17.362490 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:25:17.362500 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2026-04-05 01:25:17.362512 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-04-05 01:25:17.362522 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:25:17.362533 | 
orchestrator | 2026-04-05 01:25:17.362544 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2026-04-05 01:25:17.362555 | orchestrator | Sunday 05 April 2026 01:25:05 +0000 (0:00:00.344) 0:00:07.069 ********** 2026-04-05 01:25:17.362566 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:25:17.362577 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:25:17.362587 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:25:17.362598 | orchestrator | 2026-04-05 01:25:17.362608 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-04-05 01:25:17.362619 | orchestrator | Sunday 05 April 2026 01:25:05 +0000 (0:00:00.323) 0:00:07.392 ********** 2026-04-05 01:25:17.362630 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:25:17.362640 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:25:17.362651 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:25:17.362662 | orchestrator | 2026-04-05 01:25:17.362673 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-04-05 01:25:17.362683 | orchestrator | Sunday 05 April 2026 01:25:06 +0000 (0:00:00.317) 0:00:07.710 ********** 2026-04-05 01:25:17.362694 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:25:17.362705 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:25:17.362715 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:25:17.362726 | orchestrator | 2026-04-05 01:25:17.362737 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-04-05 01:25:17.362747 | orchestrator | Sunday 05 April 2026 01:25:06 +0000 (0:00:00.493) 0:00:08.203 ********** 2026-04-05 01:25:17.362807 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:25:17.362820 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:25:17.362830 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:25:17.362841 | orchestrator | 2026-04-05 
01:25:17.362852 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-05 01:25:17.362862 | orchestrator | Sunday 05 April 2026 01:25:06 +0000 (0:00:00.351) 0:00:08.554 ********** 2026-04-05 01:25:17.362873 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:25:17.362884 | orchestrator | 2026-04-05 01:25:17.362895 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-05 01:25:17.362905 | orchestrator | Sunday 05 April 2026 01:25:07 +0000 (0:00:00.267) 0:00:08.822 ********** 2026-04-05 01:25:17.362916 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:25:17.362927 | orchestrator | 2026-04-05 01:25:17.362937 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-05 01:25:17.362948 | orchestrator | Sunday 05 April 2026 01:25:07 +0000 (0:00:00.277) 0:00:09.099 ********** 2026-04-05 01:25:17.362968 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:25:17.363023 | orchestrator | 2026-04-05 01:25:17.363035 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-05 01:25:17.363046 | orchestrator | Sunday 05 April 2026 01:25:07 +0000 (0:00:00.275) 0:00:09.374 ********** 2026-04-05 01:25:17.363056 | orchestrator | 2026-04-05 01:25:17.363067 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-05 01:25:17.363078 | orchestrator | Sunday 05 April 2026 01:25:07 +0000 (0:00:00.071) 0:00:09.446 ********** 2026-04-05 01:25:17.363089 | orchestrator | 2026-04-05 01:25:17.363100 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-05 01:25:17.363134 | orchestrator | Sunday 05 April 2026 01:25:07 +0000 (0:00:00.070) 0:00:09.516 ********** 2026-04-05 01:25:17.363145 | orchestrator | 2026-04-05 01:25:17.363156 | orchestrator | TASK [Print report file information] 
******************************************* 2026-04-05 01:25:17.363166 | orchestrator | Sunday 05 April 2026 01:25:08 +0000 (0:00:00.072) 0:00:09.588 ********** 2026-04-05 01:25:17.363177 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:25:17.363188 | orchestrator | 2026-04-05 01:25:17.363198 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-04-05 01:25:17.363210 | orchestrator | Sunday 05 April 2026 01:25:08 +0000 (0:00:00.671) 0:00:10.260 ********** 2026-04-05 01:25:17.363230 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:25:17.363248 | orchestrator | 2026-04-05 01:25:17.363267 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-05 01:25:17.363284 | orchestrator | Sunday 05 April 2026 01:25:08 +0000 (0:00:00.259) 0:00:10.520 ********** 2026-04-05 01:25:17.363300 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:25:17.363318 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:25:17.363336 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:25:17.363355 | orchestrator | 2026-04-05 01:25:17.363374 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2026-04-05 01:25:17.363394 | orchestrator | Sunday 05 April 2026 01:25:09 +0000 (0:00:00.321) 0:00:10.841 ********** 2026-04-05 01:25:17.363412 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:25:17.363432 | orchestrator | 2026-04-05 01:25:17.363444 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-04-05 01:25:17.363455 | orchestrator | Sunday 05 April 2026 01:25:09 +0000 (0:00:00.250) 0:00:11.091 ********** 2026-04-05 01:25:17.363466 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-05 01:25:17.363476 | orchestrator | 2026-04-05 01:25:17.363487 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-04-05 
01:25:17.363498 | orchestrator | Sunday 05 April 2026 01:25:11 +0000 (0:00:02.072) 0:00:13.164 ********** 2026-04-05 01:25:17.363508 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:25:17.363519 | orchestrator | 2026-04-05 01:25:17.363530 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-04-05 01:25:17.363540 | orchestrator | Sunday 05 April 2026 01:25:11 +0000 (0:00:00.140) 0:00:13.304 ********** 2026-04-05 01:25:17.363551 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:25:17.363562 | orchestrator | 2026-04-05 01:25:17.363572 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-04-05 01:25:17.363583 | orchestrator | Sunday 05 April 2026 01:25:12 +0000 (0:00:00.319) 0:00:13.624 ********** 2026-04-05 01:25:17.363594 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:25:17.363604 | orchestrator | 2026-04-05 01:25:17.363615 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-04-05 01:25:17.363633 | orchestrator | Sunday 05 April 2026 01:25:12 +0000 (0:00:00.112) 0:00:13.737 ********** 2026-04-05 01:25:17.363645 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:25:17.363655 | orchestrator | 2026-04-05 01:25:17.363666 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-05 01:25:17.363677 | orchestrator | Sunday 05 April 2026 01:25:12 +0000 (0:00:00.170) 0:00:13.907 ********** 2026-04-05 01:25:17.363687 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:25:17.363709 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:25:17.363720 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:25:17.363731 | orchestrator | 2026-04-05 01:25:17.363741 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-04-05 01:25:17.363752 | orchestrator | Sunday 05 April 2026 01:25:12 +0000 (0:00:00.518) 0:00:14.426 ********** 
2026-04-05 01:25:17.363763 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:25:17.363774 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:25:17.363785 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:25:17.363796 | orchestrator | 2026-04-05 01:25:17.363806 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-04-05 01:25:17.363818 | orchestrator | Sunday 05 April 2026 01:25:14 +0000 (0:00:01.770) 0:00:16.196 ********** 2026-04-05 01:25:17.363828 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:25:17.363839 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:25:17.363850 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:25:17.363861 | orchestrator | 2026-04-05 01:25:17.363871 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2026-04-05 01:25:17.363882 | orchestrator | Sunday 05 April 2026 01:25:14 +0000 (0:00:00.314) 0:00:16.510 ********** 2026-04-05 01:25:17.363893 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:25:17.363903 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:25:17.363914 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:25:17.363925 | orchestrator | 2026-04-05 01:25:17.363936 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-04-05 01:25:17.363947 | orchestrator | Sunday 05 April 2026 01:25:15 +0000 (0:00:00.973) 0:00:17.484 ********** 2026-04-05 01:25:17.363958 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:25:17.363968 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:25:17.364005 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:25:17.364016 | orchestrator | 2026-04-05 01:25:17.364027 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-04-05 01:25:17.364038 | orchestrator | Sunday 05 April 2026 01:25:16 +0000 (0:00:00.330) 0:00:17.814 ********** 2026-04-05 01:25:17.364049 | orchestrator 
| ok: [testbed-node-3] 2026-04-05 01:25:17.364060 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:25:17.364071 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:25:17.364082 | orchestrator | 2026-04-05 01:25:17.364092 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-04-05 01:25:17.364103 | orchestrator | Sunday 05 April 2026 01:25:16 +0000 (0:00:00.319) 0:00:18.134 ********** 2026-04-05 01:25:17.364114 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:25:17.364125 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:25:17.364136 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:25:17.364147 | orchestrator | 2026-04-05 01:25:17.364157 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2026-04-05 01:25:17.364168 | orchestrator | Sunday 05 April 2026 01:25:16 +0000 (0:00:00.292) 0:00:18.426 ********** 2026-04-05 01:25:17.364179 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:25:17.364190 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:25:17.364201 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:25:17.364211 | orchestrator | 2026-04-05 01:25:17.364231 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-05 01:25:25.244054 | orchestrator | Sunday 05 April 2026 01:25:17 +0000 (0:00:00.509) 0:00:18.936 ********** 2026-04-05 01:25:25.244147 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:25:25.244157 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:25:25.244165 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:25:25.244171 | orchestrator | 2026-04-05 01:25:25.244179 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-04-05 01:25:25.244185 | orchestrator | Sunday 05 April 2026 01:25:17 +0000 (0:00:00.526) 0:00:19.463 ********** 2026-04-05 01:25:25.244192 | orchestrator | ok: [testbed-node-3] 2026-04-05 
01:25:25.244198 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:25:25.244205 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:25:25.244226 | orchestrator | 2026-04-05 01:25:25.244233 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2026-04-05 01:25:25.244239 | orchestrator | Sunday 05 April 2026 01:25:18 +0000 (0:00:00.507) 0:00:19.970 ********** 2026-04-05 01:25:25.244245 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:25:25.244251 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:25:25.244257 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:25:25.244264 | orchestrator | 2026-04-05 01:25:25.244270 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2026-04-05 01:25:25.244276 | orchestrator | Sunday 05 April 2026 01:25:18 +0000 (0:00:00.302) 0:00:20.273 ********** 2026-04-05 01:25:25.244283 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:25:25.244290 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:25:25.244296 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:25:25.244302 | orchestrator | 2026-04-05 01:25:25.244308 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2026-04-05 01:25:25.244314 | orchestrator | Sunday 05 April 2026 01:25:19 +0000 (0:00:00.533) 0:00:20.807 ********** 2026-04-05 01:25:25.244320 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:25:25.244326 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:25:25.244332 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:25:25.244338 | orchestrator | 2026-04-05 01:25:25.244344 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-05 01:25:25.244351 | orchestrator | Sunday 05 April 2026 01:25:19 +0000 (0:00:00.352) 0:00:21.159 ********** 2026-04-05 01:25:25.244357 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-05 01:25:25.244364 | 
orchestrator | 2026-04-05 01:25:25.244370 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-04-05 01:25:25.244376 | orchestrator | Sunday 05 April 2026 01:25:19 +0000 (0:00:00.270) 0:00:21.429 ********** 2026-04-05 01:25:25.244382 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:25:25.244388 | orchestrator | 2026-04-05 01:25:25.244394 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-05 01:25:25.244400 | orchestrator | Sunday 05 April 2026 01:25:20 +0000 (0:00:00.274) 0:00:21.703 ********** 2026-04-05 01:25:25.244418 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-05 01:25:25.244425 | orchestrator | 2026-04-05 01:25:25.244431 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-05 01:25:25.244437 | orchestrator | Sunday 05 April 2026 01:25:22 +0000 (0:00:01.947) 0:00:23.651 ********** 2026-04-05 01:25:25.244443 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-05 01:25:25.244450 | orchestrator | 2026-04-05 01:25:25.244456 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-05 01:25:25.244462 | orchestrator | Sunday 05 April 2026 01:25:22 +0000 (0:00:00.266) 0:00:23.918 ********** 2026-04-05 01:25:25.244468 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-05 01:25:25.244475 | orchestrator | 2026-04-05 01:25:25.244481 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-05 01:25:25.244487 | orchestrator | Sunday 05 April 2026 01:25:22 +0000 (0:00:00.282) 0:00:24.201 ********** 2026-04-05 01:25:25.244493 | orchestrator | 2026-04-05 01:25:25.244499 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-05 01:25:25.244505 | orchestrator | Sunday 05 April 
2026 01:25:22 +0000 (0:00:00.273) 0:00:24.474 ********** 2026-04-05 01:25:25.244512 | orchestrator | 2026-04-05 01:25:25.244518 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-05 01:25:25.244524 | orchestrator | Sunday 05 April 2026 01:25:22 +0000 (0:00:00.067) 0:00:24.541 ********** 2026-04-05 01:25:25.244530 | orchestrator | 2026-04-05 01:25:25.244536 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-05 01:25:25.244542 | orchestrator | Sunday 05 April 2026 01:25:23 +0000 (0:00:00.071) 0:00:24.613 ********** 2026-04-05 01:25:25.244548 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-05 01:25:25.244559 | orchestrator | 2026-04-05 01:25:25.244565 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-05 01:25:25.244571 | orchestrator | Sunday 05 April 2026 01:25:24 +0000 (0:00:01.419) 0:00:26.032 ********** 2026-04-05 01:25:25.244577 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2026-04-05 01:25:25.244584 | orchestrator |  "msg": [ 2026-04-05 01:25:25.244591 | orchestrator |  "Validator run completed.", 2026-04-05 01:25:25.244599 | orchestrator |  "You can find the report file here:", 2026-04-05 01:25:25.244607 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-04-05T01:24:59+00:00-report.json", 2026-04-05 01:25:25.244614 | orchestrator |  "on the following host:", 2026-04-05 01:25:25.244622 | orchestrator |  "testbed-manager" 2026-04-05 01:25:25.244629 | orchestrator |  ] 2026-04-05 01:25:25.244646 | orchestrator | } 2026-04-05 01:25:25.244654 | orchestrator | 2026-04-05 01:25:25.244661 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:25:25.244676 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 
2026-04-05 01:25:25.244685 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-05 01:25:25.244706 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-05 01:25:25.244714 | orchestrator | 2026-04-05 01:25:25.244721 | orchestrator | 2026-04-05 01:25:25.244728 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:25:25.244736 | orchestrator | Sunday 05 April 2026 01:25:24 +0000 (0:00:00.452) 0:00:26.485 ********** 2026-04-05 01:25:25.244743 | orchestrator | =============================================================================== 2026-04-05 01:25:25.244750 | orchestrator | Get ceph osd tree ------------------------------------------------------- 2.07s 2026-04-05 01:25:25.244757 | orchestrator | Aggregate test results step one ----------------------------------------- 1.95s 2026-04-05 01:25:25.244764 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 1.77s 2026-04-05 01:25:25.244771 | orchestrator | Write report file ------------------------------------------------------- 1.42s 2026-04-05 01:25:25.244778 | orchestrator | Get timestamp for report file ------------------------------------------- 1.02s 2026-04-05 01:25:25.244786 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.97s 2026-04-05 01:25:25.244793 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.71s 2026-04-05 01:25:25.244800 | orchestrator | Create report output directory ------------------------------------------ 0.71s 2026-04-05 01:25:25.244807 | orchestrator | Print report file information ------------------------------------------- 0.67s 2026-04-05 01:25:25.244815 | orchestrator | Fail test if any sub test failed ---------------------------------------- 0.53s 2026-04-05 01:25:25.244822 | 
orchestrator | Prepare test data ------------------------------------------------------- 0.53s 2026-04-05 01:25:25.244829 | orchestrator | Prepare test data ------------------------------------------------------- 0.52s 2026-04-05 01:25:25.244836 | orchestrator | Pass if count of unencrypted OSDs equals count of OSDs ------------------ 0.51s 2026-04-05 01:25:25.244843 | orchestrator | Calculate OSD devices for each host ------------------------------------- 0.51s 2026-04-05 01:25:25.244850 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.51s 2026-04-05 01:25:25.244857 | orchestrator | Prepare test data ------------------------------------------------------- 0.49s 2026-04-05 01:25:25.244865 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.49s 2026-04-05 01:25:25.244872 | orchestrator | Print report file information ------------------------------------------- 0.45s 2026-04-05 01:25:25.244880 | orchestrator | Flush handlers ---------------------------------------------------------- 0.41s 2026-04-05 01:25:25.244891 | orchestrator | Calculate OSD devices for each host ------------------------------------- 0.39s 2026-04-05 01:25:25.464870 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-04-05 01:25:25.475778 | orchestrator | + set -e 2026-04-05 01:25:25.475871 | orchestrator | + source /opt/manager-vars.sh 2026-04-05 01:25:25.475906 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-05 01:25:25.475929 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-05 01:25:25.475941 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-05 01:25:25.475952 | orchestrator | ++ CEPH_VERSION=reef 2026-04-05 01:25:25.475964 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-05 01:25:25.475977 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-05 01:25:25.476010 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-05 01:25:25.476022 | orchestrator | ++ 
MANAGER_VERSION=latest 2026-04-05 01:25:25.476033 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2026-04-05 01:25:25.476044 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2026-04-05 01:25:25.476055 | orchestrator | ++ export ARA=false 2026-04-05 01:25:25.476066 | orchestrator | ++ ARA=false 2026-04-05 01:25:25.476077 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-05 01:25:25.476088 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-05 01:25:25.476099 | orchestrator | ++ export TEMPEST=true 2026-04-05 01:25:25.476111 | orchestrator | ++ TEMPEST=true 2026-04-05 01:25:25.476122 | orchestrator | ++ export IS_ZUUL=true 2026-04-05 01:25:25.476133 | orchestrator | ++ IS_ZUUL=true 2026-04-05 01:25:25.476144 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182 2026-04-05 01:25:25.476155 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182 2026-04-05 01:25:25.476166 | orchestrator | ++ export EXTERNAL_API=false 2026-04-05 01:25:25.476177 | orchestrator | ++ EXTERNAL_API=false 2026-04-05 01:25:25.476188 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-05 01:25:25.476199 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-05 01:25:25.476210 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-05 01:25:25.476221 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-05 01:25:25.476232 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-05 01:25:25.476243 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-05 01:25:25.476254 | orchestrator | + source /etc/os-release 2026-04-05 01:25:25.476265 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-04-05 01:25:25.476277 | orchestrator | ++ NAME=Ubuntu 2026-04-05 01:25:25.476288 | orchestrator | ++ VERSION_ID=24.04 2026-04-05 01:25:25.476299 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-04-05 01:25:25.476310 | orchestrator | ++ VERSION_CODENAME=noble 2026-04-05 01:25:25.476321 | orchestrator | ++ ID=ubuntu 2026-04-05 01:25:25.476333 | orchestrator | ++ ID_LIKE=debian 
2026-04-05 01:25:25.476346 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-04-05 01:25:25.476358 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-04-05 01:25:25.476370 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-04-05 01:25:25.476383 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-04-05 01:25:25.476396 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-04-05 01:25:25.476408 | orchestrator | ++ LOGO=ubuntu-logo 2026-04-05 01:25:25.476421 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-04-05 01:25:25.476435 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-04-05 01:25:25.476449 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-04-05 01:25:25.507080 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-04-05 01:25:51.541699 | orchestrator | 2026-04-05 01:25:51.541813 | orchestrator | # Status of Elasticsearch 2026-04-05 01:25:51.541830 | orchestrator | 2026-04-05 01:25:51.541843 | orchestrator | + pushd /opt/configuration/contrib 2026-04-05 01:25:51.541855 | orchestrator | + echo 2026-04-05 01:25:51.541867 | orchestrator | + echo '# Status of Elasticsearch' 2026-04-05 01:25:51.541878 | orchestrator | + echo 2026-04-05 01:25:51.541889 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-04-05 01:25:51.734658 | orchestrator | OK - elasticsearch (kolla_logging) is running. 
status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-04-05 01:25:51.734748 | orchestrator | 2026-04-05 01:25:51.734758 | orchestrator | # Status of MariaDB 2026-04-05 01:25:51.734859 | orchestrator | 2026-04-05 01:25:51.734870 | orchestrator | + echo 2026-04-05 01:25:51.734878 | orchestrator | + echo '# Status of MariaDB' 2026-04-05 01:25:51.734886 | orchestrator | + echo 2026-04-05 01:25:51.734903 | orchestrator | ++ semver latest 10.0.0-0 2026-04-05 01:25:51.791886 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-05 01:25:51.791962 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-05 01:25:51.791974 | orchestrator | + osism status database 2026-04-05 01:25:53.504333 | orchestrator | 2026-04-05 01:25:53 | ERROR  | Unable to get ansible vault password 2026-04-05 01:25:53.504415 | orchestrator | 2026-04-05 01:25:53 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-05 01:25:53.504427 | orchestrator | 2026-04-05 01:25:53 | ERROR  | Dropping encrypted entries 2026-04-05 01:25:53.539330 | orchestrator | 2026-04-05 01:25:53 | INFO  | Connecting to MariaDB at 192.168.16.9 as root_shard_0... 
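The `check_elasticsearch` plugin output above derives its OK verdict from the standard `_cluster/health` document. A minimal sketch of that decision logic, assuming the usual health-response field names echoed in the output (the function name and thresholds here are illustrative, not taken from the plugin itself):

```python
# Map an Elasticsearch _cluster/health document to a Nagios-style verdict.
# Conventional mapping: green -> OK, yellow -> WARNING, red/timed_out -> CRITICAL.
# Field names match the health fields printed in the check output above.

def health_verdict(health: dict) -> str:
    if health.get("timed_out"):
        return "CRITICAL"
    status = health.get("status", "red")
    if status == "green" and health.get("unassigned_shards", 0) == 0:
        return "OK"
    if status == "yellow":
        return "WARNING"
    return "CRITICAL"

sample = {
    "status": "green", "timed_out": False, "number_of_nodes": 3,
    "active_primary_shards": 9, "active_shards": 22,
    "unassigned_shards": 0,
}
print(health_verdict(sample))  # -> OK
```

In the run above the cluster reports `status: green` with zero unassigned shards, which is why the plugin prints `OK - elasticsearch (kolla_logging) is running.`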
2026-04-05 01:25:53.550269 | orchestrator | 2026-04-05 01:25:53 | INFO  | Cluster Status: Primary 2026-04-05 01:25:53.550362 | orchestrator | 2026-04-05 01:25:53 | INFO  | Connected: ON 2026-04-05 01:25:53.550726 | orchestrator | 2026-04-05 01:25:53 | INFO  | Ready: ON 2026-04-05 01:25:53.550838 | orchestrator | 2026-04-05 01:25:53 | INFO  | Cluster Size: 3 2026-04-05 01:25:53.550857 | orchestrator | 2026-04-05 01:25:53 | INFO  | Local State: Synced 2026-04-05 01:25:53.550869 | orchestrator | 2026-04-05 01:25:53 | INFO  | Cluster State UUID: f3b7f4d6-308a-11f1-b460-e72a95be0344 2026-04-05 01:25:53.550882 | orchestrator | 2026-04-05 01:25:53 | INFO  | Cluster Members: 192.168.16.11:3306,192.168.16.12:3306,192.168.16.10:3306 2026-04-05 01:25:53.550894 | orchestrator | 2026-04-05 01:25:53 | INFO  | Galera Version: 26.4.25(r7387a566) 2026-04-05 01:25:53.550905 | orchestrator | 2026-04-05 01:25:53 | INFO  | Local Node UUID: 28f68fba-308b-11f1-bb2a-dbfb6333f91e 2026-04-05 01:25:53.550917 | orchestrator | 2026-04-05 01:25:53 | INFO  | Flow Control Paused: 0.00% 2026-04-05 01:25:53.550951 | orchestrator | 2026-04-05 01:25:53 | INFO  | Recv Queue Avg: 0.0149254 2026-04-05 01:25:53.550963 | orchestrator | 2026-04-05 01:25:53 | INFO  | Send Queue Avg: 0.00087668 2026-04-05 01:25:53.550979 | orchestrator | 2026-04-05 01:25:53 | INFO  | Transactions: 4584 local commits, 6785 replicated, 67 received 2026-04-05 01:25:53.550997 | orchestrator | 2026-04-05 01:25:53 | INFO  | Conflicts: 0 cert failures, 0 bf aborts 2026-04-05 01:25:53.551073 | orchestrator | 2026-04-05 01:25:53 | INFO  | MariaDB Uptime: 23 minutes, 6 seconds 2026-04-05 01:25:53.551086 | orchestrator | 2026-04-05 01:25:53 | INFO  | Threads: 152 connected, 1 running 2026-04-05 01:25:53.551097 | orchestrator | 2026-04-05 01:25:53 | INFO  | Queries: 187449 total, 0 slow 2026-04-05 01:25:53.551108 | orchestrator | 2026-04-05 01:25:53 | INFO  | Aborted Connects: 162 2026-04-05 01:25:53.551119 | orchestrator | 2026-04-05 
01:25:53 | INFO  | MariaDB Galera Cluster validation PASSED 2026-04-05 01:25:53.803366 | orchestrator | 2026-04-05 01:25:53.803485 | orchestrator | # Status of Prometheus 2026-04-05 01:25:53.803503 | orchestrator | 2026-04-05 01:25:53.803515 | orchestrator | + echo 2026-04-05 01:25:53.803526 | orchestrator | + echo '# Status of Prometheus' 2026-04-05 01:25:53.803538 | orchestrator | + echo 2026-04-05 01:25:53.803549 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-04-05 01:25:53.882633 | orchestrator | Unauthorized 2026-04-05 01:25:53.885832 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-04-05 01:25:53.945770 | orchestrator | Unauthorized 2026-04-05 01:25:53.949455 | orchestrator | 2026-04-05 01:25:53.949518 | orchestrator | # Status of RabbitMQ 2026-04-05 01:25:53.949533 | orchestrator | 2026-04-05 01:25:53.949545 | orchestrator | + echo 2026-04-05 01:25:53.949557 | orchestrator | + echo '# Status of RabbitMQ' 2026-04-05 01:25:53.949570 | orchestrator | + echo 2026-04-05 01:25:53.950139 | orchestrator | ++ semver latest 10.0.0-0 2026-04-05 01:25:54.003895 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-05 01:25:54.003987 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-05 01:25:54.004023 | orchestrator | + osism status messaging 2026-04-05 01:26:01.762889 | orchestrator | 2026-04-05 01:26:01 | ERROR  | Unable to get ansible vault password 2026-04-05 01:26:01.763046 | orchestrator | 2026-04-05 01:26:01 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-05 01:26:01.763073 | orchestrator | 2026-04-05 01:26:01 | ERROR  | Dropping encrypted entries 2026-04-05 01:26:01.796393 | orchestrator | 2026-04-05 01:26:01 | INFO  | [testbed-node-0] Connecting to RabbitMQ Management API at 192.168.16.10:15672 as openstack... 
2026-04-05 01:26:01.870604 | orchestrator | 2026-04-05 01:26:01 | INFO  | [testbed-node-0] RabbitMQ Version: 4.1.8 2026-04-05 01:26:01.870696 | orchestrator | 2026-04-05 01:26:01 | INFO  | [testbed-node-0] Erlang Version: 27.3.4.1 2026-04-05 01:26:01.870711 | orchestrator | 2026-04-05 01:26:01 | INFO  | [testbed-node-0] Cluster Name: rabbit@testbed-node-0 2026-04-05 01:26:01.870723 | orchestrator | 2026-04-05 01:26:01 | INFO  | [testbed-node-0] Cluster Size: 3 2026-04-05 01:26:01.870735 | orchestrator | 2026-04-05 01:26:01 | INFO  | [testbed-node-0] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-05 01:26:01.870748 | orchestrator | 2026-04-05 01:26:01 | INFO  | [testbed-node-0] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-05 01:26:01.870759 | orchestrator | 2026-04-05 01:26:01 | INFO  | [testbed-node-0] Partitions: None (healthy) 2026-04-05 01:26:01.870782 | orchestrator | 2026-04-05 01:26:01 | INFO  | [testbed-node-0] Connections: 206, Channels: 205, Queues: 173 2026-04-05 01:26:01.870794 | orchestrator | 2026-04-05 01:26:01 | INFO  | [testbed-node-0] Messages: 230 total, 230 ready, 0 unacked 2026-04-05 01:26:01.870805 | orchestrator | 2026-04-05 01:26:01 | INFO  | [testbed-node-0] Message Rates: 8.8/s publish, 9.6/s deliver 2026-04-05 01:26:01.871216 | orchestrator | 2026-04-05 01:26:01 | INFO  | [testbed-node-0] Disk Free: 58.2 GB (limit: 0.0 GB) 2026-04-05 01:26:01.871240 | orchestrator | 2026-04-05 01:26:01 | INFO  | [testbed-node-0] Memory Used: 0.15 GB (limit: 18.80 GB) 2026-04-05 01:26:01.871528 | orchestrator | 2026-04-05 01:26:01 | INFO  | [testbed-node-0] File Descriptors: 112/1024 2026-04-05 01:26:01.871548 | orchestrator | 2026-04-05 01:26:01 | INFO  | [testbed-node-0] Sockets: 0/0 2026-04-05 01:26:01.871779 | orchestrator | 2026-04-05 01:26:01 | INFO  | [testbed-node-1] Connecting to RabbitMQ Management API at 192.168.16.11:15672 as openstack... 
2026-04-05 01:26:01.929436 | orchestrator | 2026-04-05 01:26:01 | INFO  | [testbed-node-1] RabbitMQ Version: 4.1.8 2026-04-05 01:26:01.929528 | orchestrator | 2026-04-05 01:26:01 | INFO  | [testbed-node-1] Erlang Version: 27.3.4.1 2026-04-05 01:26:01.929543 | orchestrator | 2026-04-05 01:26:01 | INFO  | [testbed-node-1] Cluster Name: rabbit@testbed-node-1 2026-04-05 01:26:01.929555 | orchestrator | 2026-04-05 01:26:01 | INFO  | [testbed-node-1] Cluster Size: 3 2026-04-05 01:26:01.929578 | orchestrator | 2026-04-05 01:26:01 | INFO  | [testbed-node-1] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-05 01:26:01.929591 | orchestrator | 2026-04-05 01:26:01 | INFO  | [testbed-node-1] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-05 01:26:01.929602 | orchestrator | 2026-04-05 01:26:01 | INFO  | [testbed-node-1] Partitions: None (healthy) 2026-04-05 01:26:01.929634 | orchestrator | 2026-04-05 01:26:01 | INFO  | [testbed-node-1] Connections: 206, Channels: 205, Queues: 173 2026-04-05 01:26:01.929672 | orchestrator | 2026-04-05 01:26:01 | INFO  | [testbed-node-1] Messages: 230 total, 230 ready, 0 unacked 2026-04-05 01:26:01.929683 | orchestrator | 2026-04-05 01:26:01 | INFO  | [testbed-node-1] Message Rates: 8.8/s publish, 9.6/s deliver 2026-04-05 01:26:01.929694 | orchestrator | 2026-04-05 01:26:01 | INFO  | [testbed-node-1] Disk Free: 58.3 GB (limit: 0.0 GB) 2026-04-05 01:26:01.929705 | orchestrator | 2026-04-05 01:26:01 | INFO  | [testbed-node-1] Memory Used: 0.15 GB (limit: 18.80 GB) 2026-04-05 01:26:01.929716 | orchestrator | 2026-04-05 01:26:01 | INFO  | [testbed-node-1] File Descriptors: 100/1024 2026-04-05 01:26:01.930008 | orchestrator | 2026-04-05 01:26:01 | INFO  | [testbed-node-1] Sockets: 0/0 2026-04-05 01:26:01.930217 | orchestrator | 2026-04-05 01:26:01 | INFO  | [testbed-node-2] Connecting to RabbitMQ Management API at 192.168.16.12:15672 as openstack... 
2026-04-05 01:26:02.013578 | orchestrator | 2026-04-05 01:26:02 | INFO  | [testbed-node-2] RabbitMQ Version: 4.1.8 2026-04-05 01:26:02.013669 | orchestrator | 2026-04-05 01:26:02 | INFO  | [testbed-node-2] Erlang Version: 27.3.4.1 2026-04-05 01:26:02.014544 | orchestrator | 2026-04-05 01:26:02 | INFO  | [testbed-node-2] Cluster Name: rabbit@testbed-node-2 2026-04-05 01:26:02.014622 | orchestrator | 2026-04-05 01:26:02 | INFO  | [testbed-node-2] Cluster Size: 3 2026-04-05 01:26:02.014639 | orchestrator | 2026-04-05 01:26:02 | INFO  | [testbed-node-2] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-05 01:26:02.014652 | orchestrator | 2026-04-05 01:26:02 | INFO  | [testbed-node-2] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-05 01:26:02.014664 | orchestrator | 2026-04-05 01:26:02 | INFO  | [testbed-node-2] Partitions: None (healthy) 2026-04-05 01:26:02.014687 | orchestrator | 2026-04-05 01:26:02 | INFO  | [testbed-node-2] Connections: 206, Channels: 205, Queues: 173 2026-04-05 01:26:02.014699 | orchestrator | 2026-04-05 01:26:02 | INFO  | [testbed-node-2] Messages: 230 total, 230 ready, 0 unacked 2026-04-05 01:26:02.015300 | orchestrator | 2026-04-05 01:26:02 | INFO  | [testbed-node-2] Message Rates: 8.8/s publish, 9.6/s deliver 2026-04-05 01:26:02.015324 | orchestrator | 2026-04-05 01:26:02 | INFO  | [testbed-node-2] Disk Free: 58.1 GB (limit: 0.0 GB) 2026-04-05 01:26:02.015335 | orchestrator | 2026-04-05 01:26:02 | INFO  | [testbed-node-2] Memory Used: 0.15 GB (limit: 18.80 GB) 2026-04-05 01:26:02.015346 | orchestrator | 2026-04-05 01:26:02 | INFO  | [testbed-node-2] File Descriptors: 114/1024 2026-04-05 01:26:02.016106 | orchestrator | 2026-04-05 01:26:02 | INFO  | [testbed-node-2] Sockets: 0/0 2026-04-05 01:26:02.016138 | orchestrator | 2026-04-05 01:26:02 | INFO  | RabbitMQ Cluster validation PASSED 2026-04-05 01:26:02.336194 | orchestrator | 2026-04-05 01:26:02.336298 | orchestrator 
| # Status of Redis 2026-04-05 01:26:02.336315 | orchestrator | 2026-04-05 01:26:02.336328 | orchestrator | + echo 2026-04-05 01:26:02.336339 | orchestrator | + echo '# Status of Redis' 2026-04-05 01:26:02.336352 | orchestrator | + echo 2026-04-05 01:26:02.336365 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-04-05 01:26:02.341209 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001806s;;;0.000000;10.000000 2026-04-05 01:26:02.342484 | orchestrator | 2026-04-05 01:26:02.342545 | orchestrator | # Create backup of MariaDB database 2026-04-05 01:26:02.342555 | orchestrator | 2026-04-05 01:26:02.342562 | orchestrator | + popd 2026-04-05 01:26:02.342569 | orchestrator | + echo 2026-04-05 01:26:02.342576 | orchestrator | + echo '# Create backup of MariaDB database' 2026-04-05 01:26:02.342582 | orchestrator | + echo 2026-04-05 01:26:02.342590 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-04-05 01:26:03.724466 | orchestrator | 2026-04-05 01:26:03 | INFO  | Prepare task for execution of mariadb_backup. 2026-04-05 01:26:03.852321 | orchestrator | 2026-04-05 01:26:03 | INFO  | Task 4c0d55af-debd-4c97-ae38-c3c2193f45a2 (mariadb_backup) was prepared for execution. 2026-04-05 01:26:03.852372 | orchestrator | 2026-04-05 01:26:03 | INFO  | It takes a moment until task 4c0d55af-debd-4c97-ae38-c3c2193f45a2 (mariadb_backup) has been started and output is visible here. 
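The Galera validation printed by `osism status database` above (`Connected: ON`, `Ready: ON`, `Local State: Synced`, `Cluster Size: 3`) boils down to comparing a few wsrep status variables against expected values. A hedged sketch of that check, using the standard Galera variable names; the function and its exact pass criteria are illustrative, not the actual osism implementation:

```python
# Decide PASSED/FAILED from Galera wsrep status values, mirroring the
# fields printed above: Connected, Ready, Local State, and Cluster Size.
# expected_size is the node count the cluster should report (3 in this run).

def galera_validation(status: dict, expected_size: int) -> bool:
    return (
        status.get("wsrep_connected") == "ON"
        and status.get("wsrep_ready") == "ON"
        and status.get("wsrep_local_state_comment") == "Synced"
        and int(status.get("wsrep_cluster_size", 0)) == expected_size
    )

sample = {
    "wsrep_connected": "ON",
    "wsrep_ready": "ON",
    "wsrep_local_state_comment": "Synced",
    "wsrep_cluster_size": "3",
}
print("PASSED" if galera_validation(sample, 3) else "FAILED")  # -> PASSED
```

The RabbitMQ validation that follows the same pattern in the log (all declared nodes running, no partitions) would compare the `nodes` and `running_nodes` lists from the management API in the same spirit.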
2026-04-05 01:27:30.905312 | orchestrator | 2026-04-05 01:27:30.905423 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 01:27:30.905438 | orchestrator | 2026-04-05 01:27:30.905449 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 01:27:30.905458 | orchestrator | Sunday 05 April 2026 01:26:07 +0000 (0:00:00.278) 0:00:00.278 ********** 2026-04-05 01:27:30.905468 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:27:30.905478 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:27:30.905487 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:27:30.905495 | orchestrator | 2026-04-05 01:27:30.905501 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 01:27:30.905507 | orchestrator | Sunday 05 April 2026 01:26:07 +0000 (0:00:00.342) 0:00:00.621 ********** 2026-04-05 01:27:30.905526 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-04-05 01:27:30.905533 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-04-05 01:27:30.905537 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-04-05 01:27:30.905542 | orchestrator | 2026-04-05 01:27:30.905547 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-04-05 01:27:30.905552 | orchestrator | 2026-04-05 01:27:30.905557 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-04-05 01:27:30.905562 | orchestrator | Sunday 05 April 2026 01:26:08 +0000 (0:00:00.451) 0:00:01.072 ********** 2026-04-05 01:27:30.905568 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-05 01:27:30.905573 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-05 01:27:30.905578 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-05 01:27:30.905582 | orchestrator | 
2026-04-05 01:27:30.905587 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-05 01:27:30.905592 | orchestrator | Sunday 05 April 2026 01:26:08 +0000 (0:00:00.413) 0:00:01.486 ********** 2026-04-05 01:27:30.905597 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:27:30.905604 | orchestrator | 2026-04-05 01:27:30.905608 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-04-05 01:27:30.905613 | orchestrator | Sunday 05 April 2026 01:26:09 +0000 (0:00:00.681) 0:00:02.167 ********** 2026-04-05 01:27:30.905618 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:27:30.905623 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:27:30.905628 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:27:30.905632 | orchestrator | 2026-04-05 01:27:30.905637 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-04-05 01:27:30.905642 | orchestrator | Sunday 05 April 2026 01:26:13 +0000 (0:00:03.885) 0:00:06.053 ********** 2026-04-05 01:27:30.905647 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:27:30.905655 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:27:30.905664 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:27:30.905671 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-04-05 01:27:30.905679 | orchestrator | 2026-04-05 01:27:30.905686 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-04-05 01:27:30.905694 | orchestrator | skipping: no hosts matched 2026-04-05 01:27:30.905702 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-04-05 01:27:30.905709 | orchestrator | 2026-04-05 01:27:30.905716 | orchestrator | PLAY [Start mariadb services] 
************************************************** 2026-04-05 01:27:30.905743 | orchestrator | skipping: no hosts matched 2026-04-05 01:27:30.905749 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-04-05 01:27:30.905754 | orchestrator | mariadb_bootstrap_restart 2026-04-05 01:27:30.905759 | orchestrator | 2026-04-05 01:27:30.905764 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-04-05 01:27:30.905769 | orchestrator | skipping: no hosts matched 2026-04-05 01:27:30.905773 | orchestrator | 2026-04-05 01:27:30.905778 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-04-05 01:27:30.905783 | orchestrator | 2026-04-05 01:27:30.905788 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-04-05 01:27:30.905792 | orchestrator | Sunday 05 April 2026 01:27:30 +0000 (0:01:16.995) 0:01:23.049 ********** 2026-04-05 01:27:30.905797 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:27:30.905802 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:27:30.905807 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:27:30.905812 | orchestrator | 2026-04-05 01:27:30.905817 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-04-05 01:27:30.905821 | orchestrator | Sunday 05 April 2026 01:27:30 +0000 (0:00:00.329) 0:01:23.378 ********** 2026-04-05 01:27:30.905826 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:27:30.905831 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:27:30.905836 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:27:30.905840 | orchestrator | 2026-04-05 01:27:30.905846 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:27:30.905853 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-05 01:27:30.905860 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-05 01:27:30.905866 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-05 01:27:30.905872 | orchestrator | 2026-04-05 01:27:30.905878 | orchestrator | 2026-04-05 01:27:30.905883 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:27:30.905889 | orchestrator | Sunday 05 April 2026 01:27:30 +0000 (0:00:00.221) 0:01:23.600 ********** 2026-04-05 01:27:30.905895 | orchestrator | =============================================================================== 2026-04-05 01:27:30.905901 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 77.00s 2026-04-05 01:27:30.905921 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.89s 2026-04-05 01:27:30.905927 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.68s 2026-04-05 01:27:30.905932 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.45s 2026-04-05 01:27:30.905938 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.41s 2026-04-05 01:27:30.905944 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s 2026-04-05 01:27:30.905950 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.33s 2026-04-05 01:27:30.905955 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.22s 2026-04-05 01:27:31.109619 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2026-04-05 01:27:31.119038 | orchestrator | + set -e 2026-04-05 01:27:31.119152 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-05 01:27:31.119168 | 
orchestrator | ++ export INTERACTIVE=false 2026-04-05 01:27:31.119182 | orchestrator | ++ INTERACTIVE=false 2026-04-05 01:27:31.119193 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-05 01:27:31.119204 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-05 01:27:31.119215 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-05 01:27:31.120438 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-05 01:27:31.125246 | orchestrator | 2026-04-05 01:27:31.125334 | orchestrator | # OpenStack endpoints 2026-04-05 01:27:31.125349 | orchestrator | 2026-04-05 01:27:31.125361 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-05 01:27:31.125373 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-05 01:27:31.125384 | orchestrator | + export OS_CLOUD=admin 2026-04-05 01:27:31.125394 | orchestrator | + OS_CLOUD=admin 2026-04-05 01:27:31.125406 | orchestrator | + echo 2026-04-05 01:27:31.125416 | orchestrator | + echo '# OpenStack endpoints' 2026-04-05 01:27:31.125427 | orchestrator | + echo 2026-04-05 01:27:31.125439 | orchestrator | + openstack endpoint list 2026-04-05 01:27:34.460013 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-04-05 01:27:34.460178 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-04-05 01:27:34.460206 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-04-05 01:27:34.460226 | orchestrator | | 03186314dec9461eb30b719618d53a27 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-04-05 01:27:34.460246 | orchestrator | | 03b86c49fa704fb7a812878088be3b2d | RegionOne | 
designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2026-04-05 01:27:34.460266 | orchestrator | | 14b9b96b046d48e69f5a80f68d88dcb2 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2026-04-05 01:27:34.460285 | orchestrator | | 20c5fc96a78440fe99e99088c876069a | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2026-04-05 01:27:34.460305 | orchestrator | | 22836afd8bec4ed78a9f5621b5d5ea14 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2026-04-05 01:27:34.460324 | orchestrator | | 319563d4e529429882b302242a631652 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-04-05 01:27:34.460343 | orchestrator | | 3a431b317f0442419fffa48fddc6d409 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2026-04-05 01:27:34.460363 | orchestrator | | 3e063cf0e3e349b6af9de693248eec49 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-04-05 01:27:34.460382 | orchestrator | | 41c56e2ebba04045b4647329d65fdb5f | RegionOne | cinder | block-storage | True | public | https://api.testbed.osism.xyz:8776/v3 | 2026-04-05 01:27:34.460401 | orchestrator | | 4a430a35365240eebe6fcac0588a7699 | RegionOne | cinder | block-storage | True | internal | https://api-int.testbed.osism.xyz:8776/v3 | 2026-04-05 01:27:34.460421 | orchestrator | | 5825b76c35cf40cbb91afb2d65da9b92 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2026-04-05 01:27:34.460440 | orchestrator | | 5ab67e7fb6d04bc99716a7c66268f40f | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-04-05 01:27:34.460459 | orchestrator | | 5f24d8c287424b5f819da85ec2425fbe | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2026-04-05 
01:27:34.460478 | orchestrator | | 61344768e5124014bbdacbb97e2f1888 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 |
2026-04-05 01:27:34.460497 | orchestrator | | 6db4835dfaa94767b97930c2e6ea19d1 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-04-05 01:27:34.460550 | orchestrator | | 7e76b64ff1ff4da2ba02abedfe7fa1a9 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 |
2026-04-05 01:27:34.460571 | orchestrator | | 7eef0f622f5b4aa5907a81151eaa5394 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-04-05 01:27:34.460593 | orchestrator | | 84cc1d55f66c4ac78366ff535041d501 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 |
2026-04-05 01:27:34.460615 | orchestrator | | 8b9f69d6b4be4af2a30c3a1fb28d908f | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-04-05 01:27:34.460637 | orchestrator | | bd0c6680a75f4d3584fade2e5aa58cb4 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 |
2026-04-05 01:27:34.460707 | orchestrator | | be6d4f040bab47859873eefdc11e9489 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 |
2026-04-05 01:27:34.460730 | orchestrator | | c509c9d747c6476693c2bb0dfa6bf1c7 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-04-05 01:27:34.460751 | orchestrator | | ce9a4cd132b646a0b5d745be1555a834 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 |
2026-04-05 01:27:34.460771 | orchestrator | | e7883caf20ba40f588a7be8f667c0676 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 |
2026-04-05 01:27:34.460792 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-04-05 01:27:34.753679 | orchestrator |
2026-04-05 01:27:34.753771 | orchestrator | # Cinder
2026-04-05 01:27:34.753784 | orchestrator | + echo
2026-04-05 01:27:34.753795 | orchestrator | + echo '# Cinder'
2026-04-05 01:27:34.754160 | orchestrator |
2026-04-05 01:27:34.754182 | orchestrator | + echo
2026-04-05 01:27:34.754193 | orchestrator | + openstack volume service list
2026-04-05 01:27:37.455251 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-04-05 01:27:37.455362 | orchestrator | | Binary | Host | Zone | Status | State | Updated At |
2026-04-05 01:27:37.455378 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-04-05 01:27:37.455390 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-05T01:27:27.000000 |
2026-04-05 01:27:37.455401 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-05T01:27:37.000000 |
2026-04-05 01:27:37.455413 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-05T01:27:28.000000 |
2026-04-05 01:27:37.455424 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-04-05T01:27:27.000000 |
2026-04-05 01:27:37.455435 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-04-05T01:27:29.000000 |
2026-04-05 01:27:37.455446 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-04-05T01:27:29.000000 |
2026-04-05 01:27:37.455457 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-04-05T01:27:32.000000 |
2026-04-05 01:27:37.455469 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-04-05T01:27:36.000000 |
2026-04-05 01:27:37.455480 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-04-05T01:27:36.000000 |
2026-04-05 01:27:37.455492 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-04-05 01:27:37.733628 | orchestrator |
2026-04-05 01:27:37.733726 | orchestrator | # Neutron
2026-04-05 01:27:37.733741 | orchestrator |
2026-04-05 01:27:37.733753 | orchestrator | + echo
2026-04-05 01:27:37.733765 | orchestrator | + echo '# Neutron'
2026-04-05 01:27:37.733777 | orchestrator | + echo
2026-04-05 01:27:37.733789 | orchestrator | + openstack network agent list
2026-04-05 01:27:40.557957 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-04-05 01:27:40.558240 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
2026-04-05 01:27:40.558276 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-04-05 01:27:40.558296 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
2026-04-05 01:27:40.558315 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
2026-04-05 01:27:40.558332 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
2026-04-05 01:27:40.558349 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
2026-04-05 01:27:40.558392 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
2026-04-05 01:27:40.558411 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
2026-04-05 01:27:40.558427 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2026-04-05 01:27:40.558445 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
2026-04-05 01:27:40.558463 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
2026-04-05 01:27:40.558481 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-04-05 01:27:40.854922 | orchestrator | + openstack network service provider list
2026-04-05 01:27:43.517793 | orchestrator | +---------------+------+---------+
2026-04-05 01:27:43.517906 | orchestrator | | Service Type | Name | Default |
2026-04-05 01:27:43.517920 | orchestrator | +---------------+------+---------+
2026-04-05 01:27:43.517932 | orchestrator | | L3_ROUTER_NAT | ovn | True |
2026-04-05 01:27:43.517943 | orchestrator | +---------------+------+---------+
2026-04-05 01:27:43.809781 | orchestrator |
2026-04-05 01:27:43.809893 | orchestrator | # Nova
2026-04-05 01:27:43.809909 | orchestrator |
2026-04-05 01:27:43.809920 | orchestrator | + echo
2026-04-05 01:27:43.809930 | orchestrator | + echo '# Nova'
2026-04-05 01:27:43.809941 | orchestrator | + echo
2026-04-05 01:27:43.809951 | orchestrator | + openstack compute service list
2026-04-05 01:27:46.745301 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-04-05 01:27:46.745382 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At |
2026-04-05 01:27:46.745391 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-04-05 01:27:46.745398 | orchestrator | | 39cd988b-5b41-4669-8388-d3a694aab323 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-05T01:27:39.000000 |
2026-04-05 01:27:46.745428 | orchestrator | | eea131cb-5e8f-41c3-8480-cebc602b369b | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-05T01:27:36.000000 |
2026-04-05 01:27:46.745436 | orchestrator | | f811beb0-db95-4a87-81ac-a76f4f58af69 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-05T01:27:38.000000 |
2026-04-05 01:27:46.745442 | orchestrator | | 5ac68441-b279-4809-bb62-05306e00cb1e | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-04-05T01:27:38.000000 |
2026-04-05 01:27:46.745448 | orchestrator | | b6158759-6e97-47d8-897f-a9b32aa98b6c | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-04-05T01:27:40.000000 |
2026-04-05 01:27:46.745454 | orchestrator | | 78e505c9-6c08-4757-988a-1abab296b038 | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-04-05T01:27:41.000000 |
2026-04-05 01:27:46.745461 | orchestrator | | 689c267a-42b8-4e93-a71f-869075e5da31 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-04-05T01:27:44.000000 |
2026-04-05 01:27:46.745467 | orchestrator | | 12c30507-b1f2-460e-a3a4-a966e9ea00bd | nova-compute | testbed-node-3 | nova | enabled | up | 2026-04-05T01:27:44.000000 |
2026-04-05 01:27:46.745473 | orchestrator | | 47d912da-7df9-4463-9c65-97c06d3d9bb5 | nova-compute | testbed-node-5 | nova | enabled | up | 2026-04-05T01:27:44.000000 |
2026-04-05 01:27:46.745479 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-04-05 01:27:47.051848 | orchestrator | + openstack hypervisor list
2026-04-05 01:27:50.293765 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-04-05 01:27:50.294729 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
2026-04-05 01:27:50.294778 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-04-05 01:27:50.294797 | orchestrator | | 30b9b0cf-3cdf-44af-baf8-84818e4f6c93 | testbed-node-4 | QEMU | 192.168.16.14 | up |
2026-04-05 01:27:50.294815 | orchestrator | | 4a528778-78b5-4d6c-8138-1242dc1e4740 | testbed-node-3 | QEMU | 192.168.16.13 | up |
2026-04-05 01:27:50.294831 | orchestrator | | 7e06e7c8-2963-428b-8917-4028f99103dc | testbed-node-5 | QEMU | 192.168.16.15 | up |
2026-04-05 01:27:50.294848 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-04-05 01:27:50.598233 | orchestrator |
2026-04-05 01:27:50.598345 | orchestrator | # Run OpenStack test play
2026-04-05 01:27:50.598362 | orchestrator |
2026-04-05 01:27:50.598375 | orchestrator | + echo
2026-04-05 01:27:50.598387 | orchestrator | + echo '# Run OpenStack test play'
2026-04-05 01:27:50.598400 | orchestrator | + echo
2026-04-05 01:27:50.598411 | orchestrator | + osism apply --environment openstack test
2026-04-05 01:27:51.925803 | orchestrator | 2026-04-05 01:27:51 | INFO  | Trying to run play test in environment openstack
2026-04-05 01:28:01.986365 | orchestrator | 2026-04-05 01:28:01 | INFO  | Prepare task for execution of test.
2026-04-05 01:28:02.098878 | orchestrator | 2026-04-05 01:28:02 | INFO  | Task 42213944-9ce3-4cef-b134-388eaf68e646 (test) was prepared for execution.
2026-04-05 01:28:02.099011 | orchestrator | 2026-04-05 01:28:02 | INFO  | It takes a moment until task 42213944-9ce3-4cef-b134-388eaf68e646 (test) has been started and output is visible here.
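The status checks above walk each service catalog entry (`openstack volume service list`, `network agent list`, `compute service list`, `hypervisor list`) and eyeball that every row reports enabled/up. A minimal sketch of automating that check, assuming the JSON output mode of python-openstackclient (`-f json`, where keys mirror the column titles); `check_compute_services` is a hypothetical wrapper, not part of this job:

```python
import json
import subprocess


def services_healthy(records):
    """True when every service record reports Status=enabled and State=up."""
    return all(
        r.get("Status") == "enabled" and r.get("State") == "up"
        for r in records
    )


def check_compute_services():
    # Hypothetical wrapper: shells out to the openstack CLI and parses
    # its JSON output; assumes a configured cloud in the environment.
    out = subprocess.run(
        ["openstack", "compute", "service", "list", "-f", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return services_healthy(json.loads(out))
```

The same `services_healthy` predicate works for the cinder service table, since both commands expose `Status` and `State` columns.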
2026-04-05 01:31:25.198896 | orchestrator |
2026-04-05 01:31:25.199006 | orchestrator | PLAY [Create test project] *****************************************************
2026-04-05 01:31:25.199021 | orchestrator |
2026-04-05 01:31:25.199032 | orchestrator | TASK [Create test domain] ******************************************************
2026-04-05 01:31:25.199043 | orchestrator | Sunday 05 April 2026 01:28:05 +0000 (0:00:00.106) 0:00:00.106 **********
2026-04-05 01:31:25.199053 | orchestrator | changed: [localhost]
2026-04-05 01:31:25.199065 | orchestrator |
2026-04-05 01:31:25.199075 | orchestrator | TASK [Create test-admin user] **************************************************
2026-04-05 01:31:25.199085 | orchestrator | Sunday 05 April 2026 01:28:09 +0000 (0:00:03.920) 0:00:04.027 **********
2026-04-05 01:31:25.199117 | orchestrator | changed: [localhost]
2026-04-05 01:31:25.199127 | orchestrator |
2026-04-05 01:31:25.199137 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2026-04-05 01:31:25.199147 | orchestrator | Sunday 05 April 2026 01:28:14 +0000 (0:00:04.653) 0:00:08.681 **********
2026-04-05 01:31:25.199157 | orchestrator | changed: [localhost]
2026-04-05 01:31:25.199167 | orchestrator |
2026-04-05 01:31:25.199177 | orchestrator | TASK [Create test project] *****************************************************
2026-04-05 01:31:25.199187 | orchestrator | Sunday 05 April 2026 01:28:20 +0000 (0:00:06.764) 0:00:15.446 **********
2026-04-05 01:31:25.199196 | orchestrator | changed: [localhost]
2026-04-05 01:31:25.199206 | orchestrator |
2026-04-05 01:31:25.199216 | orchestrator | TASK [Create test user] ********************************************************
2026-04-05 01:31:25.199225 | orchestrator | Sunday 05 April 2026 01:28:25 +0000 (0:00:04.392) 0:00:19.839 **********
2026-04-05 01:31:25.199235 | orchestrator | changed: [localhost]
2026-04-05 01:31:25.199245 | orchestrator |
2026-04-05 01:31:25.199319 | orchestrator | TASK [Add member roles to user test] *******************************************
2026-04-05 01:31:25.199330 | orchestrator | Sunday 05 April 2026 01:28:29 +0000 (0:00:04.477) 0:00:24.317 **********
2026-04-05 01:31:25.199340 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2026-04-05 01:31:25.199351 | orchestrator | changed: [localhost] => (item=member)
2026-04-05 01:31:25.199362 | orchestrator | changed: [localhost] => (item=creator)
2026-04-05 01:31:25.199372 | orchestrator |
2026-04-05 01:31:25.199382 | orchestrator | TASK [Create test server group] ************************************************
2026-04-05 01:31:25.199392 | orchestrator | Sunday 05 April 2026 01:28:42 +0000 (0:00:12.762) 0:00:37.079 **********
2026-04-05 01:31:25.199402 | orchestrator | changed: [localhost]
2026-04-05 01:31:25.199411 | orchestrator |
2026-04-05 01:31:25.199421 | orchestrator | TASK [Create ssh security group] ***********************************************
2026-04-05 01:31:25.199431 | orchestrator | Sunday 05 April 2026 01:28:47 +0000 (0:00:04.624) 0:00:41.704 **********
2026-04-05 01:31:25.199443 | orchestrator | changed: [localhost]
2026-04-05 01:31:25.199454 | orchestrator |
2026-04-05 01:31:25.199466 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2026-04-05 01:31:25.199476 | orchestrator | Sunday 05 April 2026 01:28:52 +0000 (0:00:05.174) 0:00:46.878 **********
2026-04-05 01:31:25.199487 | orchestrator | changed: [localhost]
2026-04-05 01:31:25.199498 | orchestrator |
2026-04-05 01:31:25.199509 | orchestrator | TASK [Create icmp security group] **********************************************
2026-04-05 01:31:25.199520 | orchestrator | Sunday 05 April 2026 01:28:56 +0000 (0:00:04.601) 0:00:51.479 **********
2026-04-05 01:31:25.199532 | orchestrator | changed: [localhost]
2026-04-05 01:31:25.199543 | orchestrator |
2026-04-05 01:31:25.199554 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2026-04-05 01:31:25.199565 | orchestrator | Sunday 05 April 2026 01:29:01 +0000 (0:00:04.258) 0:00:55.737 **********
2026-04-05 01:31:25.199577 | orchestrator | changed: [localhost]
2026-04-05 01:31:25.199588 | orchestrator |
2026-04-05 01:31:25.199599 | orchestrator | TASK [Create test keypair] *****************************************************
2026-04-05 01:31:25.199610 | orchestrator | Sunday 05 April 2026 01:29:05 +0000 (0:00:04.388) 0:01:00.126 **********
2026-04-05 01:31:25.199621 | orchestrator | changed: [localhost]
2026-04-05 01:31:25.199632 | orchestrator |
2026-04-05 01:31:25.199644 | orchestrator | TASK [Create test networks] ****************************************************
2026-04-05 01:31:25.199656 | orchestrator | Sunday 05 April 2026 01:29:09 +0000 (0:00:03.919) 0:01:04.045 **********
2026-04-05 01:31:25.199667 | orchestrator | changed: [localhost] => (item={'name': 'test-1'})
2026-04-05 01:31:25.199678 | orchestrator | changed: [localhost] => (item={'name': 'test-2'})
2026-04-05 01:31:25.199690 | orchestrator | changed: [localhost] => (item={'name': 'test-3'})
2026-04-05 01:31:25.199700 | orchestrator |
2026-04-05 01:31:25.199711 | orchestrator | TASK [Create test subnets] *****************************************************
2026-04-05 01:31:25.199723 | orchestrator | Sunday 05 April 2026 01:29:24 +0000 (0:00:14.896) 0:01:18.942 **********
2026-04-05 01:31:25.199742 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'subnet': 'subnet-test-1', 'cidr': '192.168.200.0/24'})
2026-04-05 01:31:25.199753 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'subnet': 'subnet-test-2', 'cidr': '192.168.201.0/24'})
2026-04-05 01:31:25.199764 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'subnet': 'subnet-test-3', 'cidr': '192.168.202.0/24'})
2026-04-05 01:31:25.199776 | orchestrator |
2026-04-05 01:31:25.199787 | orchestrator | TASK [Create test routers] *****************************************************
2026-04-05 01:31:25.199798 | orchestrator | Sunday 05 April 2026 01:29:41 +0000 (0:00:17.160) 0:01:36.103 **********
2026-04-05 01:31:25.199808 | orchestrator | changed: [localhost] => (item={'router': 'router-test-1', 'subnet': 'subnet-test-1'})
2026-04-05 01:31:25.199818 | orchestrator | changed: [localhost] => (item={'router': 'router-test-2', 'subnet': 'subnet-test-2'})
2026-04-05 01:31:25.199827 | orchestrator | changed: [localhost] => (item={'router': 'router-test-3', 'subnet': 'subnet-test-3'})
2026-04-05 01:31:25.199837 | orchestrator |
2026-04-05 01:31:25.199847 | orchestrator | PLAY [Manage test instances and volumes] ***************************************
2026-04-05 01:31:25.199856 | orchestrator |
2026-04-05 01:31:25.199881 | orchestrator | TASK [Get test server group] ***************************************************
2026-04-05 01:31:25.199908 | orchestrator | Sunday 05 April 2026 01:30:16 +0000 (0:00:34.966) 0:02:11.069 **********
2026-04-05 01:31:25.199919 | orchestrator | ok: [localhost]
2026-04-05 01:31:25.199930 | orchestrator |
2026-04-05 01:31:25.199940 | orchestrator | TASK [Detach test volume] ******************************************************
2026-04-05 01:31:25.199950 | orchestrator | Sunday 05 April 2026 01:30:20 +0000 (0:00:03.908) 0:02:14.978 **********
2026-04-05 01:31:25.199960 | orchestrator | skipping: [localhost]
2026-04-05 01:31:25.199969 | orchestrator |
2026-04-05 01:31:25.199979 | orchestrator | TASK [Delete test volume] ******************************************************
2026-04-05 01:31:25.199989 | orchestrator | Sunday 05 April 2026 01:30:20 +0000 (0:00:00.043) 0:02:15.022 **********
2026-04-05 01:31:25.199998 | orchestrator | skipping: [localhost]
2026-04-05 01:31:25.200008 | orchestrator |
2026-04-05 01:31:25.200018 | orchestrator | TASK [Delete test instances] ***************************************************
2026-04-05 01:31:25.200027 | orchestrator | Sunday 05 April 2026 01:30:20 +0000 (0:00:00.046) 0:02:15.068 **********
2026-04-05 01:31:25.200037 | orchestrator | skipping: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-05 01:31:25.200046 | orchestrator | skipping: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-05 01:31:25.200056 | orchestrator | skipping: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-05 01:31:25.200066 | orchestrator | skipping: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-05 01:31:25.200075 | orchestrator | skipping: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-05 01:31:25.200085 | orchestrator | skipping: [localhost]
2026-04-05 01:31:25.200095 | orchestrator |
2026-04-05 01:31:25.200104 | orchestrator | TASK [Wait for instance deletion to complete] **********************************
2026-04-05 01:31:25.200114 | orchestrator | Sunday 05 April 2026 01:30:20 +0000 (0:00:00.191) 0:02:15.259 **********
2026-04-05 01:31:25.200124 | orchestrator | skipping: [localhost]
2026-04-05 01:31:25.200133 | orchestrator |
2026-04-05 01:31:25.200143 | orchestrator | TASK [Create test instances] ***************************************************
2026-04-05 01:31:25.200153 | orchestrator | Sunday 05 April 2026 01:30:20 +0000 (0:00:00.163) 0:02:15.423 **********
2026-04-05 01:31:25.200162 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-05 01:31:25.200172 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-05 01:31:25.200182 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-05 01:31:25.200191 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-05 01:31:25.200201 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-05 01:31:25.200217 | orchestrator |
2026-04-05 01:31:25.200227 | orchestrator | TASK [Wait for instance creation to complete] **********************************
2026-04-05 01:31:25.200236 | orchestrator | Sunday 05 April 2026 01:30:25 +0000 (0:00:04.969) 0:02:20.393 **********
2026-04-05 01:31:25.200246 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left).
2026-04-05 01:31:25.200278 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left).
2026-04-05 01:31:25.200288 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left).
2026-04-05 01:31:25.200297 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left).
2026-04-05 01:31:25.200307 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (56 retries left).
2026-04-05 01:31:25.200320 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j120214407402.2844', 'results_file': '/ansible/.ansible_async/j120214407402.2844', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-05 01:31:25.200333 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j194835229647.2869', 'results_file': '/ansible/.ansible_async/j194835229647.2869', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-05 01:31:25.200343 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j571524081476.2894', 'results_file': '/ansible/.ansible_async/j571524081476.2894', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-05 01:31:25.200353 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j850843896023.2919', 'results_file': '/ansible/.ansible_async/j850843896023.2919', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-05 01:31:25.200363 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j690472947293.2944', 'results_file': '/ansible/.ansible_async/j690472947293.2944', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'})
2026-04-05 01:31:25.200373 | orchestrator |
2026-04-05 01:31:25.200383 | orchestrator | TASK [Add metadata to instances] ***********************************************
2026-04-05 01:31:25.200393 | orchestrator | Sunday 05 April 2026 01:31:24 +0000 (0:00:58.356) 0:03:18.749 **********
2026-04-05 01:31:25.200409 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-05 01:32:41.250927 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-05 01:32:41.251071 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-05 01:32:41.251100 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-05 01:32:41.251121 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-05 01:32:41.251142 | orchestrator |
2026-04-05 01:32:41.251163 | orchestrator | TASK [Wait for metadata to be added] *******************************************
2026-04-05 01:32:41.251177 | orchestrator | Sunday 05 April 2026 01:31:28 +0000 (0:00:04.805) 0:03:23.555 **********
2026-04-05 01:32:41.251188 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left).
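The "FAILED - RETRYING" lines above are Ansible's `until`/`retries`/`delay` loop polling the async job results (`ansible_job_id`, `results_file`) until each instance finishes booting. The same pattern, stripped to its core, is just a bounded poll; a minimal sketch (illustrative only, not code from this job):

```python
import time


def wait_for(check, retries=30, delay=5, sleep=time.sleep):
    """Poll `check` until it returns a truthy value, mirroring an
    Ansible retries/delay loop; raise TimeoutError once retries run out."""
    for attempt in range(retries):
        result = check()
        if result:
            return result
        # Equivalent of a "FAILED - RETRYING ... (N retries left)" round.
        sleep(delay)
    raise TimeoutError(f"condition still failing after {retries} retries")
```

In the playbook, `check` corresponds to an `async_status` lookup on the job id, and the loop exits once `finished` is reported.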
2026-04-05 01:32:41.251203 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j233634683738.3055', 'results_file': '/ansible/.ansible_async/j233634683738.3055', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-05 01:32:41.251217 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j408512395504.3080', 'results_file': '/ansible/.ansible_async/j408512395504.3080', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-05 01:32:41.251254 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j835451181575.3105', 'results_file': '/ansible/.ansible_async/j835451181575.3105', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-05 01:32:41.251266 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j522271648580.3130', 'results_file': '/ansible/.ansible_async/j522271648580.3130', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-05 01:32:41.251277 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j971633498387.3155', 'results_file': '/ansible/.ansible_async/j971633498387.3155', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'}) 2026-04-05 01:32:41.251288 | orchestrator | 2026-04-05 01:32:41.251335 | orchestrator | TASK [Add tag to instances] **************************************************** 2026-04-05 01:32:41.251349 | orchestrator | Sunday 05 April 2026 01:31:38 +0000 (0:00:09.886) 0:03:33.441 ********** 2026-04-05 01:32:41.251360 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-05 01:32:41.251371 | 
orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-05 01:32:41.251383 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-05 01:32:41.251394 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-05 01:32:41.251405 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-05 01:32:41.251415 | orchestrator | 2026-04-05 01:32:41.251429 | orchestrator | TASK [Wait for tags to be added] *********************************************** 2026-04-05 01:32:41.251441 | orchestrator | Sunday 05 April 2026 01:31:43 +0000 (0:00:05.023) 0:03:38.465 ********** 2026-04-05 01:32:41.251454 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left). 2026-04-05 01:32:41.251467 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j307243311599.3231', 'results_file': '/ansible/.ansible_async/j307243311599.3231', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-05 01:32:41.251499 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j132289069029.3256', 'results_file': '/ansible/.ansible_async/j132289069029.3256', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-05 01:32:41.251512 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j456918679934.3282', 'results_file': '/ansible/.ansible_async/j456918679934.3282', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-05 01:32:41.251526 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j42515966777.3308', 'results_file': '/ansible/.ansible_async/j42515966777.3308', 
'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-05 01:32:41.251565 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j685112638396.3334', 'results_file': '/ansible/.ansible_async/j685112638396.3334', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'}) 2026-04-05 01:32:41.251579 | orchestrator | 2026-04-05 01:32:41.251592 | orchestrator | TASK [Create test volume] ****************************************************** 2026-04-05 01:32:41.251604 | orchestrator | Sunday 05 April 2026 01:31:54 +0000 (0:00:10.398) 0:03:48.863 ********** 2026-04-05 01:32:41.251626 | orchestrator | changed: [localhost] 2026-04-05 01:32:41.251641 | orchestrator | 2026-04-05 01:32:41.251653 | orchestrator | TASK [Attach test volume] ****************************************************** 2026-04-05 01:32:41.251667 | orchestrator | Sunday 05 April 2026 01:32:01 +0000 (0:00:07.394) 0:03:56.258 ********** 2026-04-05 01:32:41.251685 | orchestrator | changed: [localhost] 2026-04-05 01:32:41.251704 | orchestrator | 2026-04-05 01:32:41.251725 | orchestrator | TASK [Create floating ip addresses] ******************************************** 2026-04-05 01:32:41.251745 | orchestrator | Sunday 05 April 2026 01:32:15 +0000 (0:00:14.139) 0:04:10.398 ********** 2026-04-05 01:32:41.251766 | orchestrator | ok: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-05 01:32:41.251787 | orchestrator | ok: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-05 01:32:41.251808 | orchestrator | ok: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-05 01:32:41.251829 | orchestrator | ok: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-05 01:32:41.251848 | orchestrator | ok: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-05 01:32:41.251868 | orchestrator | 2026-04-05 
01:32:41.251888 | orchestrator | TASK [Print floating ip addresses] ********************************************* 2026-04-05 01:32:41.251906 | orchestrator | Sunday 05 April 2026 01:32:40 +0000 (0:00:25.122) 0:04:35.520 ********** 2026-04-05 01:32:41.251926 | orchestrator | ok: [localhost] => (item=test) => { 2026-04-05 01:32:41.251947 | orchestrator |  "msg": "test: 192.168.112.185" 2026-04-05 01:32:41.251968 | orchestrator | } 2026-04-05 01:32:41.251988 | orchestrator | ok: [localhost] => (item=test-1) => { 2026-04-05 01:32:41.252000 | orchestrator |  "msg": "test-1: 192.168.112.189" 2026-04-05 01:32:41.252011 | orchestrator | } 2026-04-05 01:32:41.252021 | orchestrator | ok: [localhost] => (item=test-2) => { 2026-04-05 01:32:41.252032 | orchestrator |  "msg": "test-2: 192.168.112.113" 2026-04-05 01:32:41.252043 | orchestrator | } 2026-04-05 01:32:41.252053 | orchestrator | ok: [localhost] => (item=test-3) => { 2026-04-05 01:32:41.252064 | orchestrator |  "msg": "test-3: 192.168.112.132" 2026-04-05 01:32:41.252075 | orchestrator | } 2026-04-05 01:32:41.252085 | orchestrator | ok: [localhost] => (item=test-4) => { 2026-04-05 01:32:41.252096 | orchestrator |  "msg": "test-4: 192.168.112.130" 2026-04-05 01:32:41.252107 | orchestrator | } 2026-04-05 01:32:41.252117 | orchestrator | 2026-04-05 01:32:41.252128 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:32:41.252139 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-05 01:32:41.252151 | orchestrator | 2026-04-05 01:32:41.252162 | orchestrator | 2026-04-05 01:32:41.252173 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:32:41.252184 | orchestrator | Sunday 05 April 2026 01:32:41 +0000 (0:00:00.127) 0:04:35.647 ********** 2026-04-05 01:32:41.252194 | orchestrator | 
===============================================================================
2026-04-05 01:32:41.252205 | orchestrator | Wait for instance creation to complete --------------------------------- 58.36s
2026-04-05 01:32:41.252216 | orchestrator | Create test routers ---------------------------------------------------- 34.97s
2026-04-05 01:32:41.252226 | orchestrator | Create floating ip addresses ------------------------------------------- 25.12s
2026-04-05 01:32:41.252237 | orchestrator | Create test subnets ---------------------------------------------------- 17.16s
2026-04-05 01:32:41.252247 | orchestrator | Create test networks --------------------------------------------------- 14.90s
2026-04-05 01:32:41.252258 | orchestrator | Attach test volume ----------------------------------------------------- 14.14s
2026-04-05 01:32:41.252269 | orchestrator | Add member roles to user test ------------------------------------------ 12.76s
2026-04-05 01:32:41.252293 | orchestrator | Wait for tags to be added ---------------------------------------------- 10.40s
2026-04-05 01:32:41.252332 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.89s
2026-04-05 01:32:41.252354 | orchestrator | Create test volume ------------------------------------------------------ 7.39s
2026-04-05 01:32:41.252365 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.76s
2026-04-05 01:32:41.252375 | orchestrator | Create ssh security group ----------------------------------------------- 5.17s
2026-04-05 01:32:41.252386 | orchestrator | Add tag to instances ---------------------------------------------------- 5.02s
2026-04-05 01:32:41.252397 | orchestrator | Create test instances --------------------------------------------------- 4.97s
2026-04-05 01:32:41.252407 | orchestrator | Add metadata to instances ----------------------------------------------- 4.81s
2026-04-05 01:32:41.252418 | orchestrator | Create test-admin user -------------------------------------------------- 4.65s
2026-04-05 01:32:41.252429 | orchestrator | Create test server group ------------------------------------------------ 4.63s
2026-04-05 01:32:41.252439 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.60s
2026-04-05 01:32:41.252450 | orchestrator | Create test user -------------------------------------------------------- 4.48s
2026-04-05 01:32:41.252461 | orchestrator | Create test project ----------------------------------------------------- 4.39s
2026-04-05 01:32:41.450799 | orchestrator | + server_list
2026-04-05 01:32:41.450900 | orchestrator | + openstack --os-cloud test server list
2026-04-05 01:32:45.106724 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-05 01:32:45.106816 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2026-04-05 01:32:45.106829 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-05 01:32:45.106842 | orchestrator | | ecb1f020-133a-4cd3-a5a2-4869c73071c2 | test-3 | ACTIVE | test-2=192.168.112.132, 192.168.201.197 | N/A (booted from volume) | SCS-1L-1 |
2026-04-05 01:32:45.106862 | orchestrator | | a4a227eb-4c47-4366-b733-a94348f3f8b9 | test-4 | ACTIVE | test-3=192.168.112.130, 192.168.202.242 | N/A (booted from volume) | SCS-1L-1 |
2026-04-05 01:32:45.106879 | orchestrator | | 89c08efd-2a2f-4eb1-b55d-b6e7fa3cb679 | test-1 | ACTIVE | test-1=192.168.112.189, 192.168.200.41 | N/A (booted from volume) | SCS-1L-1 |
2026-04-05 01:32:45.106898 | orchestrator | | d94395d9-a06e-4f52-bf61-5d8ecbf752b6 | test-2 | ACTIVE | test-2=192.168.112.113, 192.168.201.61 | N/A (booted from volume) | SCS-1L-1 |
2026-04-05 01:32:45.106915 | orchestrator | | 38c7f575-7855-416d-a20d-6f0a41a1c9eb | test | ACTIVE | test-1=192.168.112.185, 192.168.200.199 | N/A (booted from volume) | SCS-1L-1 |
2026-04-05 01:32:45.106933 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-05 01:32:45.384484 | orchestrator | + openstack --os-cloud test server show test
2026-04-05 01:32:48.954094 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-05 01:32:48.954214 | orchestrator | | Field | Value |
2026-04-05 01:32:48.954231 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-05 01:32:48.954262 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-05 01:32:48.954274 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-05 01:32:48.954286 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-05 01:32:48.954329 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2026-04-05 01:32:48.954343 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-05 01:32:48.954395 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-05 01:32:48.954426 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-05 01:32:48.954438 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-05 01:32:48.954450 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-05 01:32:48.954470 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-05 01:32:48.954481 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-05 01:32:48.954493 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-05 01:32:48.954504 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-05 01:32:48.954520 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-05 01:32:48.954532 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-05 01:32:48.954544 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-05T01:30:59.000000 |
2026-04-05 01:32:48.954565 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-05 01:32:48.954580 | orchestrator | | accessIPv4 | |
2026-04-05 01:32:48.954593 | orchestrator | | accessIPv6 | |
2026-04-05 01:32:48.954612 | orchestrator | | addresses | test-1=192.168.112.185, 192.168.200.199 |
2026-04-05 01:32:48.954625 | orchestrator | | config_drive | |
2026-04-05 01:32:48.954638 | orchestrator | | created | 2026-04-05T01:30:30Z |
2026-04-05 01:32:48.954651 | orchestrator | | description | None |
2026-04-05 01:32:48.954668 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-05 01:32:48.954681 | orchestrator | | hostId | 3d2cbd79a8659f807b33a77491158e3fcab713c886b51820e4524878 |
2026-04-05 01:32:48.954695 | orchestrator | | host_status | None |
2026-04-05 01:32:48.954715 | orchestrator | | id | 38c7f575-7855-416d-a20d-6f0a41a1c9eb |
2026-04-05 01:32:48.954729 | orchestrator | | image | N/A (booted from volume) |
2026-04-05 01:32:48.954749 | orchestrator | | key_name | test |
2026-04-05 01:32:48.954761 | orchestrator | | locked | False |
2026-04-05 01:32:48.954772 | orchestrator | | locked_reason | None |
2026-04-05 01:32:48.954783 | orchestrator | | name | test |
2026-04-05 01:32:48.954794 | orchestrator | | pinned_availability_zone | None |
2026-04-05 01:32:48.954806 | orchestrator | | progress | 0 |
2026-04-05 01:32:48.954824 | orchestrator | | project_id | ea4ad60d481241fbb96365ffb8e4f0cd |
2026-04-05 01:32:48.954836 | orchestrator | | properties | hostname='test' |
2026-04-05 01:32:48.954854 | orchestrator | | security_groups | name='icmp' |
2026-04-05 01:32:48.954881 | orchestrator | | | name='ssh' |
2026-04-05 01:32:48.954893 | orchestrator | | server_groups | None |
2026-04-05 01:32:48.954904 | orchestrator | | status | ACTIVE |
2026-04-05 01:32:48.954915 | orchestrator | | tags | test |
2026-04-05 01:32:48.954926 | orchestrator | | trusted_image_certificates | None |
2026-04-05 01:32:48.954937 | orchestrator | | updated | 2026-04-05T01:31:30Z |
2026-04-05 01:32:48.954958 | orchestrator | | user_id | c303a6a03fd24e5997b12c13a6c9423f |
2026-04-05 01:32:48.954969 | orchestrator | | volumes_attached | delete_on_termination='True', id='182ea25f-61eb-4184-916d-d17ce3ec0b1d' |
2026-04-05 01:32:48.954981 | orchestrator | | | delete_on_termination='False', id='fb69e80d-22a3-46b3-b0e0-fbbc8b0dd31b' |
2026-04-05 01:32:48.958726 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-05 01:32:49.251361 | orchestrator | + openstack --os-cloud test server show test-1
2026-04-05 01:32:52.345752 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-05 01:32:52.345831 | orchestrator | | Field | Value |
2026-04-05 01:32:52.345841 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-05 01:32:52.345848 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-05 01:32:52.345855 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-05 01:32:52.345861 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-05 01:32:52.345881 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 |
2026-04-05 01:32:52.345888 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-05 01:32:52.345894 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-05 01:32:52.345930 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-05 01:32:52.345937 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-05 01:32:52.345944 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-05 01:32:52.345950 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-05 01:32:52.345957 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-05 01:32:52.345963 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-05 01:32:52.345970 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-05 01:32:52.345980 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-05 01:32:52.345986 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-05 01:32:52.345997 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-05T01:30:59.000000 |
2026-04-05 01:32:52.346008 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-05 01:32:52.346057 | orchestrator | | accessIPv4 | |
2026-04-05 01:32:52.346065 | orchestrator | | accessIPv6 | |
2026-04-05 01:32:52.346071 | orchestrator | | addresses | test-1=192.168.112.189, 192.168.200.41 |
2026-04-05 01:32:52.346078 | orchestrator | | config_drive | |
2026-04-05 01:32:52.346084 | orchestrator | | created | 2026-04-05T01:30:31Z |
2026-04-05 01:32:52.346095 | orchestrator | | description | None |
2026-04-05 01:32:52.346101 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-05 01:32:52.346114 | orchestrator | | hostId | 3d2cbd79a8659f807b33a77491158e3fcab713c886b51820e4524878 |
2026-04-05 01:32:52.346121 | orchestrator | | host_status | None |
2026-04-05 01:32:52.346134 | orchestrator | | id | 89c08efd-2a2f-4eb1-b55d-b6e7fa3cb679 |
2026-04-05 01:32:52.346141 | orchestrator | | image | N/A (booted from volume) |
2026-04-05 01:32:52.346147 | orchestrator | | key_name | test |
2026-04-05 01:32:52.346154 | orchestrator | | locked | False |
2026-04-05 01:32:52.346160 | orchestrator | | locked_reason | None |
2026-04-05 01:32:52.346167 | orchestrator | | name | test-1 |
2026-04-05 01:32:52.346176 | orchestrator | | pinned_availability_zone | None |
2026-04-05 01:32:52.346188 | orchestrator | | progress | 0 |
2026-04-05 01:32:52.346195 | orchestrator | | project_id | ea4ad60d481241fbb96365ffb8e4f0cd |
2026-04-05 01:32:52.346201 | orchestrator | | properties | hostname='test-1' |
2026-04-05 01:32:52.346212 | orchestrator | | security_groups | name='icmp' |
2026-04-05 01:32:52.346219 | orchestrator | | | name='ssh' |
2026-04-05 01:32:52.346225 | orchestrator | | server_groups | None |
2026-04-05 01:32:52.346232 | orchestrator | | status | ACTIVE |
2026-04-05 01:32:52.346238 | orchestrator | | tags | test |
2026-04-05 01:32:52.346244 | orchestrator | | trusted_image_certificates | None |
2026-04-05 01:32:52.346255 | orchestrator | | updated | 2026-04-05T01:31:30Z |
2026-04-05 01:32:52.346267 | orchestrator | | user_id | c303a6a03fd24e5997b12c13a6c9423f |
2026-04-05 01:32:52.346274 | orchestrator | | volumes_attached | delete_on_termination='True', id='f450d78b-07d2-4188-a811-5d275e653b8d' |
2026-04-05 01:32:52.348904 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-05 01:32:52.610805 | orchestrator | + openstack --os-cloud test server show test-2
2026-04-05 01:32:55.672765 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-05 01:32:55.672867 | orchestrator | | Field | Value |
2026-04-05 01:32:55.672877 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-05 01:32:55.672885 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-05 01:32:55.672892 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-05 01:32:55.672898 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-05 01:32:55.672925 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 |
2026-04-05 01:32:55.672969 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-05 01:32:55.672977 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-05 01:32:55.672997 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-05 01:32:55.673005 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-05 01:32:55.673013 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-05 01:32:55.673021 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-05 01:32:55.673029 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-05 01:32:55.673036 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-05 01:32:55.673049 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-05 01:32:55.673059 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-05 01:32:55.673067 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-05 01:32:55.673074 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-05T01:31:01.000000 |
2026-04-05 01:32:55.673085 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-05 01:32:55.673093 | orchestrator | | accessIPv4 | |
2026-04-05 01:32:55.673100 | orchestrator | | accessIPv6 | |
2026-04-05 01:32:55.673107 | orchestrator | | addresses | test-2=192.168.112.113, 192.168.201.61 |
2026-04-05 01:32:55.673114 | orchestrator | | config_drive | |
2026-04-05 01:32:55.673125 | orchestrator | | created | 2026-04-05T01:30:31Z |
2026-04-05 01:32:55.673133 | orchestrator | | description | None |
2026-04-05 01:32:55.673142 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-05 01:32:55.673150 | orchestrator | | hostId | ed98d9bdb587757e11362a6b11bcc7a08ae2209f64eb9e0b19f20326 |
2026-04-05 01:32:55.673157 | orchestrator | | host_status | None |
2026-04-05 01:32:55.673169 | orchestrator | | id | d94395d9-a06e-4f52-bf61-5d8ecbf752b6 |
2026-04-05 01:32:55.673176 | orchestrator | | image | N/A (booted from volume) |
2026-04-05 01:32:55.673183 | orchestrator | | key_name | test |
2026-04-05 01:32:55.673190 | orchestrator | | locked | False |
2026-04-05 01:32:55.673202 | orchestrator | | locked_reason | None |
2026-04-05 01:32:55.673209 | orchestrator | | name | test-2 |
2026-04-05 01:32:55.673216 | orchestrator | | pinned_availability_zone | None |
2026-04-05 01:32:55.673226 | orchestrator | | progress | 0 |
2026-04-05 01:32:55.673234 | orchestrator | | project_id | ea4ad60d481241fbb96365ffb8e4f0cd |
2026-04-05 01:32:55.673241 | orchestrator | | properties | hostname='test-2' |
2026-04-05 01:32:55.673253 | orchestrator | | security_groups | name='icmp' |
2026-04-05 01:32:55.673260 | orchestrator | | | name='ssh' |
2026-04-05 01:32:55.673267 | orchestrator | | server_groups | None |
2026-04-05 01:32:55.673274 | orchestrator | | status | ACTIVE |
2026-04-05 01:32:55.673285 | orchestrator | | tags | test |
2026-04-05 01:32:55.673293 | orchestrator | | trusted_image_certificates | None |
2026-04-05 01:32:55.673300 | orchestrator | | updated | 2026-04-05T01:31:31Z |
2026-04-05 01:32:55.673351 | orchestrator | | user_id | c303a6a03fd24e5997b12c13a6c9423f |
2026-04-05 01:32:55.673359 | orchestrator | | volumes_attached | delete_on_termination='True', id='087ac24b-c99d-4410-802f-87378838ea55' |
2026-04-05 01:32:55.677913 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-05 01:32:55.950627 | orchestrator | + openstack --os-cloud test server show test-3
2026-04-05 01:32:59.000090 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-05 01:32:59.000200 | orchestrator | | Field | Value |
2026-04-05 01:32:59.000211 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-05 01:32:59.000237 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-05 01:32:59.000245 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-05 01:32:59.000253 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-05 01:32:59.000260 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 |
2026-04-05 01:32:59.000268 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-05 01:32:59.000276 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-05 01:32:59.000297 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-05 01:32:59.000305 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-05 01:32:59.000350 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-05 01:32:59.000374 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-05 01:32:59.000382 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-05 01:32:59.000748 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-05 01:32:59.000773 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-05 01:32:59.000783 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-05 01:32:59.000792 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-05 01:32:59.000800 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-05T01:31:01.000000 |
2026-04-05 01:32:59.000820 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-05 01:32:59.000829 | orchestrator | | accessIPv4 | |
2026-04-05 01:32:59.000847 | orchestrator | | accessIPv6 | |
2026-04-05 01:32:59.000856 | orchestrator | | addresses | test-2=192.168.112.132, 192.168.201.197 |
2026-04-05 01:32:59.000868 | orchestrator | | config_drive | |
2026-04-05 01:32:59.000876 | orchestrator | | created | 2026-04-05T01:30:34Z |
2026-04-05 01:32:59.000883 | orchestrator | | description | None |
2026-04-05 01:32:59.000890 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-05 01:32:59.000898 | orchestrator | | hostId | ed98d9bdb587757e11362a6b11bcc7a08ae2209f64eb9e0b19f20326 |
2026-04-05 01:32:59.000905 | orchestrator | | host_status | None |
2026-04-05 01:32:59.000919 | orchestrator | | id | ecb1f020-133a-4cd3-a5a2-4869c73071c2 |
2026-04-05 01:32:59.000932 | orchestrator | | image | N/A (booted from volume) |
2026-04-05 01:32:59.000939 | orchestrator | | key_name | test |
2026-04-05 01:32:59.000947 | orchestrator | | locked | False |
2026-04-05 01:32:59.000958 | orchestrator | | locked_reason | None |
2026-04-05 01:32:59.000965 | orchestrator | | name | test-3 |
2026-04-05 01:32:59.000973 | orchestrator | | pinned_availability_zone | None |
2026-04-05 01:32:59.000980 | orchestrator | | progress | 0 |
2026-04-05 01:32:59.000988 | orchestrator | | project_id | ea4ad60d481241fbb96365ffb8e4f0cd |
2026-04-05 01:32:59.000995 | orchestrator | | properties | hostname='test-3' |
2026-04-05 01:32:59.001008 | orchestrator | | security_groups | name='icmp' |
2026-04-05 01:32:59.001021 | orchestrator | | | name='ssh' |
2026-04-05 01:32:59.001029 | orchestrator | | server_groups | None |
2026-04-05 01:32:59.001036 | orchestrator | | status | ACTIVE |
2026-04-05 01:32:59.001047 | orchestrator | | tags | test |
2026-04-05 01:32:59.001055 | orchestrator | | trusted_image_certificates | None |
2026-04-05 01:32:59.001063 | orchestrator | | updated | 2026-04-05T01:31:32Z |
2026-04-05 01:32:59.001070 | orchestrator | | user_id | c303a6a03fd24e5997b12c13a6c9423f |
2026-04-05 01:32:59.001077 | orchestrator | | volumes_attached | delete_on_termination='True', id='6a370938-8da3-44b1-9a09-b3afe081e227' |
2026-04-05 01:32:59.006655 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-05 01:32:59.289891 | orchestrator | + openstack --os-cloud test server show test-4
2026-04-05 01:33:02.292851 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-05 01:33:02.292961 | orchestrator | | Field | Value |
2026-04-05 01:33:02.292977 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-05 01:33:02.292990 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-05 01:33:02.293019 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-05 01:33:02.293031 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-05 01:33:02.293044 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 |
2026-04-05 01:33:02.293064 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-05 01:33:02.293085 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-05 01:33:02.293153 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-05 01:33:02.293178 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-05 01:33:02.293198 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-05 01:33:02.293219 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-05 01:33:02.293239 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-05 01:33:02.293259 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-05 01:33:02.293271 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-05 01:33:02.293282 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-05 01:33:02.293294 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-05 01:33:02.293434 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-05T01:31:01.000000 |
2026-04-05 01:33:02.293471 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-05 01:33:02.293493 | orchestrator | | accessIPv4 | |
2026-04-05 01:33:02.293514 | orchestrator | | accessIPv6 | |
2026-04-05 01:33:02.293533 | orchestrator | | addresses | test-3=192.168.112.130, 192.168.202.242 |
2026-04-05 01:33:02.293554 | orchestrator | | config_drive | |
2026-04-05 01:33:02.293585 | orchestrator | | created | 2026-04-05T01:30:33Z |
2026-04-05 01:33:02.293607 | orchestrator | | description | None |
2026-04-05 01:33:02.293628 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-05 01:33:02.293662 | orchestrator | | hostId | ed98d9bdb587757e11362a6b11bcc7a08ae2209f64eb9e0b19f20326 |
2026-04-05 01:33:02.293682 | orchestrator | | host_status | None |
2026-04-05 01:33:02.293712 | orchestrator | | id | a4a227eb-4c47-4366-b733-a94348f3f8b9 |
2026-04-05 01:33:02.293730 | orchestrator | | image | N/A (booted from volume) |
2026-04-05 01:33:02.293751 | orchestrator | | key_name | test |
2026-04-05 01:33:02.293772 | orchestrator | | locked | False |
2026-04-05 01:33:02.293792 | orchestrator | | locked_reason | None |
2026-04-05 01:33:02.293811 | orchestrator | | name | test-4 |
2026-04-05 01:33:02.293829 | orchestrator | | pinned_availability_zone | None |
2026-04-05 01:33:02.293850 | orchestrator | | progress | 0 |
2026-04-05 01:33:02.293881 | orchestrator | | project_id | ea4ad60d481241fbb96365ffb8e4f0cd |
2026-04-05 01:33:02.293899 | orchestrator | | properties | hostname='test-4' |
2026-04-05 01:33:02.293920 | orchestrator | | security_groups | name='icmp' |
2026-04-05 01:33:02.294007 | orchestrator | | | name='ssh' |
2026-04-05 01:33:02.294119 | orchestrator | | server_groups | None |
2026-04-05 01:33:02.294132 | orchestrator | | status | ACTIVE |
2026-04-05 01:33:02.294143 | orchestrator | | tags | test |
2026-04-05 01:33:02.294160 | orchestrator | | trusted_image_certificates | None |
2026-04-05 01:33:02.294172 | orchestrator | | updated | 2026-04-05T01:31:33Z |
2026-04-05 01:33:02.294194 | orchestrator | | user_id | c303a6a03fd24e5997b12c13a6c9423f |
2026-04-05 01:33:02.294206 | orchestrator | | volumes_attached | delete_on_termination='True', id='51cf5c8a-1514-4403-9b5a-8639a78c4210' |
2026-04-05 01:33:02.295445 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-05 01:33:02.586747 | orchestrator | + server_ping
2026-04-05 01:33:02.588134 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-04-05 01:33:02.589083 | orchestrator | ++ tr -d '\r'
2026-04-05 01:33:05.523531 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-05 01:33:05.523651 | orchestrator | + ping -c3 192.168.112.132
2026-04-05 01:33:05.538386 | orchestrator | PING 192.168.112.132 (192.168.112.132) 56(84) bytes of data.
2026-04-05 01:33:05.538473 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=1 ttl=63 time=10.2 ms
2026-04-05 01:33:06.532737 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=2 ttl=63 time=2.59 ms
2026-04-05 01:33:07.534843 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=3 ttl=63 time=1.92 ms
2026-04-05 01:33:07.534972 | orchestrator |
2026-04-05 01:33:07.534992 | orchestrator | --- 192.168.112.132 ping statistics ---
2026-04-05 01:33:07.535005 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-05 01:33:07.535017 | orchestrator | rtt min/avg/max/mdev = 1.918/4.914/10.237/3.773 ms
2026-04-05 01:33:07.535029 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-05 01:33:07.535041 | orchestrator | + ping -c3 192.168.112.130
2026-04-05 01:33:07.547073 | orchestrator | PING 192.168.112.130 (192.168.112.130) 56(84) bytes of data.
2026-04-05 01:33:07.547195 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=1 ttl=63 time=7.26 ms
2026-04-05 01:33:08.543974 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=2 ttl=63 time=2.13 ms
2026-04-05 01:33:09.545486 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=3 ttl=63 time=1.75 ms
2026-04-05 01:33:09.545613 | orchestrator |
2026-04-05 01:33:09.545664 | orchestrator | --- 192.168.112.130 ping statistics ---
2026-04-05 01:33:09.545688 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-05 01:33:09.545707 | orchestrator | rtt min/avg/max/mdev = 1.754/3.716/7.261/2.511 ms
2026-04-05 01:33:09.545727 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-05 01:33:09.545894 | orchestrator | + ping -c3 192.168.112.113
2026-04-05 01:33:09.556162 | orchestrator | PING 192.168.112.113 (192.168.112.113) 56(84) bytes of data.
2026-04-05 01:33:09.556254 | orchestrator | 64 bytes from 192.168.112.113: icmp_seq=1 ttl=63 time=5.25 ms
2026-04-05 01:33:10.554457 | orchestrator | 64 bytes from 192.168.112.113: icmp_seq=2 ttl=63 time=1.59 ms
2026-04-05 01:33:11.556424 | orchestrator | 64 bytes from 192.168.112.113: icmp_seq=3 ttl=63 time=1.75 ms
2026-04-05 01:33:11.556530 | orchestrator |
2026-04-05 01:33:11.556546 | orchestrator | --- 192.168.112.113 ping statistics ---
2026-04-05 01:33:11.556589 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-04-05 01:33:11.556602 | orchestrator | rtt min/avg/max/mdev = 1.587/2.864/5.254/1.691 ms
2026-04-05 01:33:11.557471 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-05 01:33:11.557584 | orchestrator | + ping -c3 192.168.112.185
2026-04-05 01:33:11.569571 | orchestrator | PING 192.168.112.185 (192.168.112.185) 56(84) bytes of data.
2026-04-05 01:33:11.569667 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=1 ttl=63 time=6.95 ms
2026-04-05 01:33:12.564722 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=2 ttl=63 time=2.21 ms
2026-04-05 01:33:13.566294 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=3 ttl=63 time=1.95 ms
2026-04-05 01:33:13.566443 | orchestrator |
2026-04-05 01:33:13.566460 | orchestrator | --- 192.168.112.185 ping statistics ---
2026-04-05 01:33:13.566474 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-04-05 01:33:13.566486 | orchestrator | rtt min/avg/max/mdev = 1.952/3.702/6.951/2.299 ms
2026-04-05 01:33:13.566681 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-05 01:33:13.566702 | orchestrator | + ping -c3 192.168.112.189
2026-04-05 01:33:13.576678 | orchestrator | PING 192.168.112.189 (192.168.112.189) 56(84) bytes of data.
2026-04-05 01:33:13.576792 | orchestrator | 64 bytes from 192.168.112.189: icmp_seq=1 ttl=63 time=5.44 ms
2026-04-05 01:33:14.574821 | orchestrator | 64 bytes from 192.168.112.189: icmp_seq=2 ttl=63 time=1.84 ms
2026-04-05 01:33:15.576551 | orchestrator | 64 bytes from 192.168.112.189: icmp_seq=3 ttl=63 time=1.65 ms
2026-04-05 01:33:15.576656 | orchestrator |
2026-04-05 01:33:15.576673 | orchestrator | --- 192.168.112.189 ping statistics ---
2026-04-05 01:33:15.576688 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-05 01:33:15.576699 | orchestrator | rtt min/avg/max/mdev = 1.652/2.976/5.440/1.743 ms
2026-04-05 01:33:15.576723 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-04-05 01:33:15.576737 | orchestrator | + compute_list
2026-04-05 01:33:15.576748 | orchestrator | + osism manage compute list testbed-node-3
2026-04-05 01:33:17.321055 | orchestrator | 2026-04-05 01:33:17 | ERROR  | Unable to get ansible vault password
2026-04-05 01:33:17.321151 | orchestrator | 2026-04-05 01:33:17 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-05 01:33:17.321163 | orchestrator | 2026-04-05 01:33:17 | ERROR  | Dropping encrypted entries
2026-04-05 01:33:20.650182 | orchestrator | +------+--------+----------+
2026-04-05 01:33:20.650297 | orchestrator | | ID | Name | Status |
2026-04-05 01:33:20.650314 | orchestrator | |------+--------+----------|
2026-04-05 01:33:20.650391 | orchestrator | +------+--------+----------+
2026-04-05 01:33:20.987753 | orchestrator | + osism manage compute list testbed-node-4
2026-04-05 01:33:22.658258 | orchestrator | 2026-04-05 01:33:22 | ERROR  | Unable to get ansible vault password
2026-04-05 01:33:22.658469 | orchestrator | 2026-04-05 01:33:22 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-05 01:33:22.658490 | orchestrator | 2026-04-05 01:33:22 | ERROR  | Dropping encrypted entries
2026-04-05 01:33:24.426983 | orchestrator | +--------------------------------------+--------+----------+
2026-04-05 01:33:24.427091 | orchestrator | | ID | Name | Status |
2026-04-05 01:33:24.427106 | orchestrator | |--------------------------------------+--------+----------|
2026-04-05 01:33:24.427117 | orchestrator | | ecb1f020-133a-4cd3-a5a2-4869c73071c2 | test-3 | ACTIVE |
2026-04-05 01:33:24.427128 | orchestrator | | a4a227eb-4c47-4366-b733-a94348f3f8b9 | test-4 | ACTIVE |
2026-04-05 01:33:24.427140 | orchestrator | | d94395d9-a06e-4f52-bf61-5d8ecbf752b6 | test-2 | ACTIVE |
2026-04-05 01:33:24.427151 | orchestrator | +--------------------------------------+--------+----------+
2026-04-05 01:33:24.812651 | orchestrator | + osism manage compute list testbed-node-5
2026-04-05 01:33:26.465127 | orchestrator | 2026-04-05 01:33:26 | ERROR  | Unable to get ansible vault password
2026-04-05 01:33:26.465281 | orchestrator | 2026-04-05 01:33:26 | ERROR  | Unable to get vault
secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-05 01:33:26.465305 | orchestrator | 2026-04-05 01:33:26 | ERROR  | Dropping encrypted entries 2026-04-05 01:33:28.136822 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-05 01:33:28.136917 | orchestrator | | ID | Name | Status | 2026-04-05 01:33:28.136930 | orchestrator | |--------------------------------------+--------+----------| 2026-04-05 01:33:28.136942 | orchestrator | | 89c08efd-2a2f-4eb1-b55d-b6e7fa3cb679 | test-1 | ACTIVE | 2026-04-05 01:33:28.136953 | orchestrator | | 38c7f575-7855-416d-a20d-6f0a41a1c9eb | test | ACTIVE | 2026-04-05 01:33:28.136964 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-05 01:33:28.486519 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4 2026-04-05 01:33:30.114696 | orchestrator | 2026-04-05 01:33:30 | ERROR  | Unable to get ansible vault password 2026-04-05 01:33:30.114809 | orchestrator | 2026-04-05 01:33:30 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-05 01:33:30.114826 | orchestrator | 2026-04-05 01:33:30 | ERROR  | Dropping encrypted entries 2026-04-05 01:33:31.804363 | orchestrator | 2026-04-05 01:33:31 | INFO  | Live migrating server ecb1f020-133a-4cd3-a5a2-4869c73071c2 2026-04-05 01:33:45.995294 | orchestrator | 2026-04-05 01:33:45 | INFO  | Live migration of ecb1f020-133a-4cd3-a5a2-4869c73071c2 (test-3) is still in progress 2026-04-05 01:33:48.391482 | orchestrator | 2026-04-05 01:33:48 | INFO  | Live migration of ecb1f020-133a-4cd3-a5a2-4869c73071c2 (test-3) is still in progress 2026-04-05 01:33:50.767394 | orchestrator | 2026-04-05 01:33:50 | INFO  | Live migration of ecb1f020-133a-4cd3-a5a2-4869c73071c2 (test-3) is still in progress 2026-04-05 01:33:53.172893 | orchestrator | 2026-04-05 01:33:53 | INFO  | Live migration of 
ecb1f020-133a-4cd3-a5a2-4869c73071c2 (test-3) is still in progress 2026-04-05 01:33:55.607605 | orchestrator | 2026-04-05 01:33:55 | INFO  | Live migration of ecb1f020-133a-4cd3-a5a2-4869c73071c2 (test-3) is still in progress 2026-04-05 01:33:58.035820 | orchestrator | 2026-04-05 01:33:58 | INFO  | Live migration of ecb1f020-133a-4cd3-a5a2-4869c73071c2 (test-3) is still in progress 2026-04-05 01:34:00.332319 | orchestrator | 2026-04-05 01:34:00 | INFO  | Live migration of ecb1f020-133a-4cd3-a5a2-4869c73071c2 (test-3) is still in progress 2026-04-05 01:34:02.714935 | orchestrator | 2026-04-05 01:34:02 | INFO  | Live migration of ecb1f020-133a-4cd3-a5a2-4869c73071c2 (test-3) is still in progress 2026-04-05 01:34:05.029291 | orchestrator | 2026-04-05 01:34:05 | INFO  | Live migration of ecb1f020-133a-4cd3-a5a2-4869c73071c2 (test-3) is still in progress 2026-04-05 01:34:07.446266 | orchestrator | 2026-04-05 01:34:07 | INFO  | Live migration of ecb1f020-133a-4cd3-a5a2-4869c73071c2 (test-3) is still in progress 2026-04-05 01:34:09.814947 | orchestrator | 2026-04-05 01:34:09 | INFO  | Live migration of ecb1f020-133a-4cd3-a5a2-4869c73071c2 (test-3) is still in progress 2026-04-05 01:34:12.125603 | orchestrator | 2026-04-05 01:34:12 | INFO  | Live migration of ecb1f020-133a-4cd3-a5a2-4869c73071c2 (test-3) completed with status ACTIVE 2026-04-05 01:34:12.125760 | orchestrator | 2026-04-05 01:34:12 | INFO  | Live migrating server a4a227eb-4c47-4366-b733-a94348f3f8b9 2026-04-05 01:34:25.908674 | orchestrator | 2026-04-05 01:34:25 | INFO  | Live migration of a4a227eb-4c47-4366-b733-a94348f3f8b9 (test-4) is still in progress 2026-04-05 01:34:28.286290 | orchestrator | 2026-04-05 01:34:28 | INFO  | Live migration of a4a227eb-4c47-4366-b733-a94348f3f8b9 (test-4) is still in progress 2026-04-05 01:34:30.639830 | orchestrator | 2026-04-05 01:34:30 | INFO  | Live migration of a4a227eb-4c47-4366-b733-a94348f3f8b9 (test-4) is still in progress 2026-04-05 01:34:33.010351 | orchestrator 
| 2026-04-05 01:34:33 | INFO  | Live migration of a4a227eb-4c47-4366-b733-a94348f3f8b9 (test-4) is still in progress 2026-04-05 01:34:35.334561 | orchestrator | 2026-04-05 01:34:35 | INFO  | Live migration of a4a227eb-4c47-4366-b733-a94348f3f8b9 (test-4) is still in progress 2026-04-05 01:34:37.717721 | orchestrator | 2026-04-05 01:34:37 | INFO  | Live migration of a4a227eb-4c47-4366-b733-a94348f3f8b9 (test-4) is still in progress 2026-04-05 01:34:40.026156 | orchestrator | 2026-04-05 01:34:40 | INFO  | Live migration of a4a227eb-4c47-4366-b733-a94348f3f8b9 (test-4) is still in progress 2026-04-05 01:34:42.383608 | orchestrator | 2026-04-05 01:34:42 | INFO  | Live migration of a4a227eb-4c47-4366-b733-a94348f3f8b9 (test-4) is still in progress 2026-04-05 01:34:44.662449 | orchestrator | 2026-04-05 01:34:44 | INFO  | Live migration of a4a227eb-4c47-4366-b733-a94348f3f8b9 (test-4) is still in progress 2026-04-05 01:34:47.004122 | orchestrator | 2026-04-05 01:34:47 | INFO  | Live migration of a4a227eb-4c47-4366-b733-a94348f3f8b9 (test-4) completed with status ACTIVE 2026-04-05 01:34:47.004249 | orchestrator | 2026-04-05 01:34:47 | INFO  | Live migrating server d94395d9-a06e-4f52-bf61-5d8ecbf752b6 2026-04-05 01:34:59.388428 | orchestrator | 2026-04-05 01:34:59 | INFO  | Live migration of d94395d9-a06e-4f52-bf61-5d8ecbf752b6 (test-2) is still in progress 2026-04-05 01:35:01.743284 | orchestrator | 2026-04-05 01:35:01 | INFO  | Live migration of d94395d9-a06e-4f52-bf61-5d8ecbf752b6 (test-2) is still in progress 2026-04-05 01:35:04.087083 | orchestrator | 2026-04-05 01:35:04 | INFO  | Live migration of d94395d9-a06e-4f52-bf61-5d8ecbf752b6 (test-2) is still in progress 2026-04-05 01:35:06.360603 | orchestrator | 2026-04-05 01:35:06 | INFO  | Live migration of d94395d9-a06e-4f52-bf61-5d8ecbf752b6 (test-2) is still in progress 2026-04-05 01:35:08.745216 | orchestrator | 2026-04-05 01:35:08 | INFO  | Live migration of d94395d9-a06e-4f52-bf61-5d8ecbf752b6 (test-2) is still in 
progress 2026-04-05 01:35:11.193054 | orchestrator | 2026-04-05 01:35:11 | INFO  | Live migration of d94395d9-a06e-4f52-bf61-5d8ecbf752b6 (test-2) is still in progress 2026-04-05 01:35:13.561322 | orchestrator | 2026-04-05 01:35:13 | INFO  | Live migration of d94395d9-a06e-4f52-bf61-5d8ecbf752b6 (test-2) is still in progress 2026-04-05 01:35:15.850683 | orchestrator | 2026-04-05 01:35:15 | INFO  | Live migration of d94395d9-a06e-4f52-bf61-5d8ecbf752b6 (test-2) is still in progress 2026-04-05 01:35:18.245068 | orchestrator | 2026-04-05 01:35:18 | INFO  | Live migration of d94395d9-a06e-4f52-bf61-5d8ecbf752b6 (test-2) completed with status ACTIVE 2026-04-05 01:35:18.583171 | orchestrator | + compute_list 2026-04-05 01:35:18.583292 | orchestrator | + osism manage compute list testbed-node-3 2026-04-05 01:35:20.215657 | orchestrator | 2026-04-05 01:35:20 | ERROR  | Unable to get ansible vault password 2026-04-05 01:35:20.215761 | orchestrator | 2026-04-05 01:35:20 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-05 01:35:20.215778 | orchestrator | 2026-04-05 01:35:20 | ERROR  | Dropping encrypted entries 2026-04-05 01:35:21.821549 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-05 01:35:21.821654 | orchestrator | | ID | Name | Status | 2026-04-05 01:35:21.821669 | orchestrator | |--------------------------------------+--------+----------| 2026-04-05 01:35:21.821681 | orchestrator | | ecb1f020-133a-4cd3-a5a2-4869c73071c2 | test-3 | ACTIVE | 2026-04-05 01:35:21.821724 | orchestrator | | a4a227eb-4c47-4366-b733-a94348f3f8b9 | test-4 | ACTIVE | 2026-04-05 01:35:21.821735 | orchestrator | | d94395d9-a06e-4f52-bf61-5d8ecbf752b6 | test-2 | ACTIVE | 2026-04-05 01:35:21.821747 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-05 01:35:22.165379 | orchestrator | + osism manage compute list testbed-node-4 2026-04-05 01:35:23.871177 | 
orchestrator | 2026-04-05 01:35:23 | ERROR  | Unable to get ansible vault password 2026-04-05 01:35:23.871285 | orchestrator | 2026-04-05 01:35:23 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-05 01:35:23.871302 | orchestrator | 2026-04-05 01:35:23 | ERROR  | Dropping encrypted entries 2026-04-05 01:35:25.123818 | orchestrator | +------+--------+----------+ 2026-04-05 01:35:25.123937 | orchestrator | | ID | Name | Status | 2026-04-05 01:35:25.123967 | orchestrator | |------+--------+----------| 2026-04-05 01:35:25.123987 | orchestrator | +------+--------+----------+ 2026-04-05 01:35:25.462298 | orchestrator | + osism manage compute list testbed-node-5 2026-04-05 01:35:27.171677 | orchestrator | 2026-04-05 01:35:27 | ERROR  | Unable to get ansible vault password 2026-04-05 01:35:27.171804 | orchestrator | 2026-04-05 01:35:27 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-05 01:35:27.171834 | orchestrator | 2026-04-05 01:35:27 | ERROR  | Dropping encrypted entries 2026-04-05 01:35:28.846882 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-05 01:35:28.847030 | orchestrator | | ID | Name | Status | 2026-04-05 01:35:28.847056 | orchestrator | |--------------------------------------+--------+----------| 2026-04-05 01:35:28.847077 | orchestrator | | 89c08efd-2a2f-4eb1-b55d-b6e7fa3cb679 | test-1 | ACTIVE | 2026-04-05 01:35:28.847098 | orchestrator | | 38c7f575-7855-416d-a20d-6f0a41a1c9eb | test | ACTIVE | 2026-04-05 01:35:28.847120 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-05 01:35:29.182571 | orchestrator | + server_ping 2026-04-05 01:35:29.184103 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-04-05 01:35:29.184298 | orchestrator | ++ tr -d '\r' 2026-04-05 01:35:32.004626 | 
orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-05 01:35:32.004719 | orchestrator | + ping -c3 192.168.112.132 2026-04-05 01:35:32.017856 | orchestrator | PING 192.168.112.132 (192.168.112.132) 56(84) bytes of data. 2026-04-05 01:35:32.017950 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=1 ttl=63 time=8.53 ms 2026-04-05 01:35:33.013918 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=2 ttl=63 time=2.64 ms 2026-04-05 01:35:34.015674 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=3 ttl=63 time=1.90 ms 2026-04-05 01:35:34.015767 | orchestrator | 2026-04-05 01:35:34.015780 | orchestrator | --- 192.168.112.132 ping statistics --- 2026-04-05 01:35:34.015791 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-04-05 01:35:34.015800 | orchestrator | rtt min/avg/max/mdev = 1.904/4.358/8.533/2.967 ms 2026-04-05 01:35:34.016239 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-05 01:35:34.016265 | orchestrator | + ping -c3 192.168.112.130 2026-04-05 01:35:34.027319 | orchestrator | PING 192.168.112.130 (192.168.112.130) 56(84) bytes of data. 
2026-04-05 01:35:34.027448 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=1 ttl=63 time=7.00 ms
2026-04-05 01:35:35.024078 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=2 ttl=63 time=2.44 ms
2026-04-05 01:35:36.025871 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=3 ttl=63 time=1.74 ms
2026-04-05 01:35:36.025982 | orchestrator |
2026-04-05 01:35:36.025995 | orchestrator | --- 192.168.112.130 ping statistics ---
2026-04-05 01:35:36.026005 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-05 01:35:36.026065 | orchestrator | rtt min/avg/max/mdev = 1.735/3.722/6.997/2.333 ms
2026-04-05 01:35:36.026076 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-05 01:35:36.026111 | orchestrator | + ping -c3 192.168.112.113
2026-04-05 01:35:36.035842 | orchestrator | PING 192.168.112.113 (192.168.112.113) 56(84) bytes of data.
2026-04-05 01:35:36.035908 | orchestrator | 64 bytes from 192.168.112.113: icmp_seq=1 ttl=63 time=5.23 ms
2026-04-05 01:35:37.034489 | orchestrator | 64 bytes from 192.168.112.113: icmp_seq=2 ttl=63 time=2.14 ms
2026-04-05 01:35:38.036185 | orchestrator | 64 bytes from 192.168.112.113: icmp_seq=3 ttl=63 time=1.65 ms
2026-04-05 01:35:38.036314 | orchestrator |
2026-04-05 01:35:38.036342 | orchestrator | --- 192.168.112.113 ping statistics ---
2026-04-05 01:35:38.036361 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-05 01:35:38.036379 | orchestrator | rtt min/avg/max/mdev = 1.649/3.004/5.225/1.582 ms
2026-04-05 01:35:38.036482 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-05 01:35:38.036507 | orchestrator | + ping -c3 192.168.112.185
2026-04-05 01:35:38.046784 | orchestrator | PING 192.168.112.185 (192.168.112.185) 56(84) bytes of data.
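The `compute_list` / `migrate` / `compute_list` sequence traced in this log follows a drain-and-verify pattern: list the instances on the source node, live-migrate them to the target, then list again and expect an empty table. A minimal sketch of that pattern as a shell function (the `drain_node` wrapper name is hypothetical; the `osism` invocations are the ones visible in the trace):

```shell
#!/usr/bin/env bash
# Sketch of the drain-and-verify pattern this job runs per compute node.
# Assumes the osism CLI is on PATH and configured; node names follow the
# testbed-node-* convention from the log.
set -e

drain_node() {
    local source="$1" target="$2"
    # Before: show the instances currently scheduled on the source node.
    osism manage compute list "$source"
    # Live-migrate everything off the source onto the target, unattended.
    osism manage compute migrate --yes --target "$target" "$source"
    # After: the source's instance table should now be empty.
    osism manage compute list "$source"
}

# Example usage (matches the first migration in the trace):
# drain_node testbed-node-4 testbed-node-3
```

The `--yes` flag skips the interactive confirmation, which is what makes this usable in a periodic CI pipeline.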
2026-04-05 01:35:38.046864 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=1 ttl=63 time=5.70 ms
2026-04-05 01:35:39.045614 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=2 ttl=63 time=2.31 ms
2026-04-05 01:35:40.046352 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=3 ttl=63 time=1.69 ms
2026-04-05 01:35:40.046448 | orchestrator |
2026-04-05 01:35:40.046456 | orchestrator | --- 192.168.112.185 ping statistics ---
2026-04-05 01:35:40.046463 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-05 01:35:40.046467 | orchestrator | rtt min/avg/max/mdev = 1.685/3.230/5.698/1.763 ms
2026-04-05 01:35:40.047441 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-05 01:35:40.047518 | orchestrator | + ping -c3 192.168.112.189
2026-04-05 01:35:40.057923 | orchestrator | PING 192.168.112.189 (192.168.112.189) 56(84) bytes of data.
2026-04-05 01:35:40.057999 | orchestrator | 64 bytes from 192.168.112.189: icmp_seq=1 ttl=63 time=5.22 ms
2026-04-05 01:35:41.056898 | orchestrator | 64 bytes from 192.168.112.189: icmp_seq=2 ttl=63 time=2.37 ms
2026-04-05 01:35:42.057074 | orchestrator | 64 bytes from 192.168.112.189: icmp_seq=3 ttl=63 time=1.54 ms
2026-04-05 01:35:42.057179 | orchestrator |
2026-04-05 01:35:42.057195 | orchestrator | --- 192.168.112.189 ping statistics ---
2026-04-05 01:35:42.057208 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-04-05 01:35:42.057220 | orchestrator | rtt min/avg/max/mdev = 1.539/3.041/5.217/1.575 ms
2026-04-05 01:35:42.057232 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5
2026-04-05 01:35:43.690818 | orchestrator | 2026-04-05 01:35:43 | ERROR  | Unable to get ansible vault password
2026-04-05 01:35:43.690925 | orchestrator | 2026-04-05 01:35:43 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-05 01:35:43.690942 | orchestrator | 2026-04-05 01:35:43 | ERROR  | Dropping encrypted entries
2026-04-05 01:35:45.280149 | orchestrator | 2026-04-05 01:35:45 | INFO  | Live migrating server 89c08efd-2a2f-4eb1-b55d-b6e7fa3cb679
2026-04-05 01:35:56.695588 | orchestrator | 2026-04-05 01:35:56 | INFO  | Live migration of 89c08efd-2a2f-4eb1-b55d-b6e7fa3cb679 (test-1) is still in progress
2026-04-05 01:35:59.167087 | orchestrator | 2026-04-05 01:35:59 | INFO  | Live migration of 89c08efd-2a2f-4eb1-b55d-b6e7fa3cb679 (test-1) is still in progress
2026-04-05 01:36:01.673133 | orchestrator | 2026-04-05 01:36:01 | INFO  | Live migration of 89c08efd-2a2f-4eb1-b55d-b6e7fa3cb679 (test-1) is still in progress
2026-04-05 01:36:04.042186 | orchestrator | 2026-04-05 01:36:04 | INFO  | Live migration of 89c08efd-2a2f-4eb1-b55d-b6e7fa3cb679 (test-1) is still in progress
2026-04-05 01:36:06.354675 | orchestrator | 2026-04-05 01:36:06 | INFO  | Live migration of 89c08efd-2a2f-4eb1-b55d-b6e7fa3cb679 (test-1) is still in progress
2026-04-05 01:36:08.750936 | orchestrator | 2026-04-05 01:36:08 | INFO  | Live migration of 89c08efd-2a2f-4eb1-b55d-b6e7fa3cb679 (test-1) is still in progress
2026-04-05 01:36:11.050262 | orchestrator | 2026-04-05 01:36:11 | INFO  | Live migration of 89c08efd-2a2f-4eb1-b55d-b6e7fa3cb679 (test-1) is still in progress
2026-04-05 01:36:13.357808 | orchestrator | 2026-04-05 01:36:13 | INFO  | Live migration of 89c08efd-2a2f-4eb1-b55d-b6e7fa3cb679 (test-1) is still in progress
2026-04-05 01:36:15.839177 | orchestrator | 2026-04-05 01:36:15 | INFO  | Live migration of 89c08efd-2a2f-4eb1-b55d-b6e7fa3cb679 (test-1) is still in progress
2026-04-05 01:36:18.158238 | orchestrator | 2026-04-05 01:36:18 | INFO  | Live migration of 89c08efd-2a2f-4eb1-b55d-b6e7fa3cb679 (test-1) completed with status ACTIVE
2026-04-05 01:36:18.158341 | orchestrator | 2026-04-05 01:36:18 | INFO  | Live migrating server 38c7f575-7855-416d-a20d-6f0a41a1c9eb
2026-04-05 01:36:28.918168 | orchestrator | 2026-04-05 01:36:28 | INFO  | Live migration of 38c7f575-7855-416d-a20d-6f0a41a1c9eb (test) is still in progress
2026-04-05 01:36:31.279900 | orchestrator | 2026-04-05 01:36:31 | INFO  | Live migration of 38c7f575-7855-416d-a20d-6f0a41a1c9eb (test) is still in progress
2026-04-05 01:36:33.668817 | orchestrator | 2026-04-05 01:36:33 | INFO  | Live migration of 38c7f575-7855-416d-a20d-6f0a41a1c9eb (test) is still in progress
2026-04-05 01:36:36.002287 | orchestrator | 2026-04-05 01:36:35 | INFO  | Live migration of 38c7f575-7855-416d-a20d-6f0a41a1c9eb (test) is still in progress
2026-04-05 01:36:38.392297 | orchestrator | 2026-04-05 01:36:38 | INFO  | Live migration of 38c7f575-7855-416d-a20d-6f0a41a1c9eb (test) is still in progress
2026-04-05 01:36:40.657736 | orchestrator | 2026-04-05 01:36:40 | INFO  | Live migration of 38c7f575-7855-416d-a20d-6f0a41a1c9eb (test) is still in progress
2026-04-05 01:36:42.969577 | orchestrator | 2026-04-05 01:36:42 | INFO  | Live migration of 38c7f575-7855-416d-a20d-6f0a41a1c9eb (test) is still in progress
2026-04-05 01:36:45.312513 | orchestrator | 2026-04-05 01:36:45 | INFO  | Live migration of 38c7f575-7855-416d-a20d-6f0a41a1c9eb (test) is still in progress
2026-04-05 01:36:47.585701 | orchestrator | 2026-04-05 01:36:47 | INFO  | Live migration of 38c7f575-7855-416d-a20d-6f0a41a1c9eb (test) is still in progress
2026-04-05 01:36:49.892250 | orchestrator | 2026-04-05 01:36:49 | INFO  | Live migration of 38c7f575-7855-416d-a20d-6f0a41a1c9eb (test) is still in progress
2026-04-05 01:36:52.289238 | orchestrator | 2026-04-05 01:36:52 | INFO  | Live migration of 38c7f575-7855-416d-a20d-6f0a41a1c9eb (test) completed with status ACTIVE
2026-04-05 01:36:52.639714 | orchestrator | + compute_list
2026-04-05 01:36:52.639817 | orchestrator | + osism manage compute list testbed-node-3
2026-04-05 01:36:54.342207 | orchestrator | 2026-04-05 01:36:54 | ERROR  | Unable to get ansible vault password
2026-04-05 01:36:54.342320 | orchestrator | 2026-04-05 01:36:54 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-05 01:36:54.342337 | orchestrator | 2026-04-05 01:36:54 | ERROR  | Dropping encrypted entries
2026-04-05 01:36:56.016046 | orchestrator | +--------------------------------------+--------+----------+
2026-04-05 01:36:56.016152 | orchestrator | | ID | Name | Status |
2026-04-05 01:36:56.016166 | orchestrator | |--------------------------------------+--------+----------|
2026-04-05 01:36:56.016177 | orchestrator | | ecb1f020-133a-4cd3-a5a2-4869c73071c2 | test-3 | ACTIVE |
2026-04-05 01:36:56.016187 | orchestrator | | a4a227eb-4c47-4366-b733-a94348f3f8b9 | test-4 | ACTIVE |
2026-04-05 01:36:56.016197 | orchestrator | | 89c08efd-2a2f-4eb1-b55d-b6e7fa3cb679 | test-1 | ACTIVE |
2026-04-05 01:36:56.016207 | orchestrator | | d94395d9-a06e-4f52-bf61-5d8ecbf752b6 | test-2 | ACTIVE |
2026-04-05 01:36:56.016218 | orchestrator | | 38c7f575-7855-416d-a20d-6f0a41a1c9eb | test | ACTIVE |
2026-04-05 01:36:56.016254 | orchestrator | +--------------------------------------+--------+----------+
2026-04-05 01:36:56.375800 | orchestrator | + osism manage compute list testbed-node-4
2026-04-05 01:36:58.023164 | orchestrator | 2026-04-05 01:36:58 | ERROR  | Unable to get ansible vault password
2026-04-05 01:36:58.023278 | orchestrator | 2026-04-05 01:36:58 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-05 01:36:58.023295 | orchestrator | 2026-04-05 01:36:58 | ERROR  | Dropping encrypted entries
2026-04-05 01:36:59.172854 | orchestrator | +------+--------+----------+
2026-04-05 01:36:59.172953 | orchestrator | | ID | Name | Status |
2026-04-05 01:36:59.172965 | orchestrator | |------+--------+----------|
2026-04-05 01:36:59.172974 | orchestrator | +------+--------+----------+
2026-04-05 01:36:59.523872 | orchestrator | + osism manage compute list testbed-node-5
2026-04-05 01:37:01.245613 | orchestrator | 2026-04-05 01:37:01 | ERROR  | Unable to get ansible vault password
2026-04-05 01:37:01.247086 | orchestrator | 2026-04-05 01:37:01 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-05 01:37:01.247128 | orchestrator | 2026-04-05 01:37:01 | ERROR  | Dropping encrypted entries
2026-04-05 01:37:02.412664 | orchestrator | +------+--------+----------+
2026-04-05 01:37:02.412778 | orchestrator | | ID | Name | Status |
2026-04-05 01:37:02.412796 | orchestrator | |------+--------+----------|
2026-04-05 01:37:02.412809 | orchestrator | +------+--------+----------+
2026-04-05 01:37:02.759516 | orchestrator | + server_ping
2026-04-05 01:37:02.759817 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-04-05 01:37:02.760278 | orchestrator | ++ tr -d '\r'
2026-04-05 01:37:05.568809 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-05 01:37:05.568906 | orchestrator | + ping -c3 192.168.112.132
2026-04-05 01:37:05.581297 | orchestrator | PING 192.168.112.132 (192.168.112.132) 56(84) bytes of data.
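The `server_ping` calls in this trace (the `+`/`++` lines are bash `set -x` output) expand to a loop over all ACTIVE floating IPs. A minimal sketch of that helper, assuming an OpenStack cloud entry named `test` in `clouds.yaml`:

```shell
#!/usr/bin/env bash
# Minimal sketch of the server_ping helper seen in the trace above.
# Assumes: openstack CLI configured with a cloud named "test".

server_ping() {
    # List every ACTIVE floating IP as a bare value; tr strips stray
    # carriage returns that would otherwise corrupt the addresses.
    for address in $(openstack --os-cloud test floating ip list \
            --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r'); do
        # Three probes per address; ping exits non-zero on total loss,
        # which fails the job if a migrated instance became unreachable.
        ping -c3 "$address"
    done
}

# Example usage:
# server_ping
```

Running it before and after each migration round, as the job does here, verifies that live migration did not break connectivity to any instance.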
2026-04-05 01:37:05.581404 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=1 ttl=63 time=7.93 ms
2026-04-05 01:37:06.577687 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=2 ttl=63 time=2.86 ms
2026-04-05 01:37:07.577690 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=3 ttl=63 time=1.86 ms
2026-04-05 01:37:07.577795 | orchestrator |
2026-04-05 01:37:07.577811 | orchestrator | --- 192.168.112.132 ping statistics ---
2026-04-05 01:37:07.577825 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-04-05 01:37:07.577836 | orchestrator | rtt min/avg/max/mdev = 1.857/4.215/7.930/2.658 ms
2026-04-05 01:37:07.578235 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-05 01:37:07.578264 | orchestrator | + ping -c3 192.168.112.130
2026-04-05 01:37:07.593415 | orchestrator | PING 192.168.112.130 (192.168.112.130) 56(84) bytes of data.
2026-04-05 01:37:07.593526 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=1 ttl=63 time=8.44 ms
2026-04-05 01:37:08.590152 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=2 ttl=63 time=2.97 ms
2026-04-05 01:37:09.590752 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=3 ttl=63 time=1.72 ms
2026-04-05 01:37:09.590877 | orchestrator |
2026-04-05 01:37:09.590905 | orchestrator | --- 192.168.112.130 ping statistics ---
2026-04-05 01:37:09.590926 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-05 01:37:09.590946 | orchestrator | rtt min/avg/max/mdev = 1.718/4.375/8.442/2.920 ms
2026-04-05 01:37:09.591054 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-05 01:37:09.591068 | orchestrator | + ping -c3 192.168.112.113
2026-04-05 01:37:09.602110 | orchestrator | PING 192.168.112.113 (192.168.112.113) 56(84) bytes of data.
2026-04-05 01:37:09.602217 | orchestrator | 64 bytes from 192.168.112.113: icmp_seq=1 ttl=63 time=6.90 ms
2026-04-05 01:37:10.598756 | orchestrator | 64 bytes from 192.168.112.113: icmp_seq=2 ttl=63 time=2.26 ms
2026-04-05 01:37:11.600008 | orchestrator | 64 bytes from 192.168.112.113: icmp_seq=3 ttl=63 time=1.43 ms
2026-04-05 01:37:11.600119 | orchestrator |
2026-04-05 01:37:11.600174 | orchestrator | --- 192.168.112.113 ping statistics ---
2026-04-05 01:37:11.600191 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-05 01:37:11.600206 | orchestrator | rtt min/avg/max/mdev = 1.432/3.529/6.899/2.406 ms
2026-04-05 01:37:11.600221 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-05 01:37:11.600235 | orchestrator | + ping -c3 192.168.112.185
2026-04-05 01:37:11.613043 | orchestrator | PING 192.168.112.185 (192.168.112.185) 56(84) bytes of data.
2026-04-05 01:37:11.613128 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=1 ttl=63 time=7.85 ms
2026-04-05 01:37:12.609398 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=2 ttl=63 time=2.91 ms
2026-04-05 01:37:13.610124 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=3 ttl=63 time=1.71 ms
2026-04-05 01:37:13.610232 | orchestrator |
2026-04-05 01:37:13.610252 | orchestrator | --- 192.168.112.185 ping statistics ---
2026-04-05 01:37:13.610262 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-04-05 01:37:13.610270 | orchestrator | rtt min/avg/max/mdev = 1.709/4.158/7.851/2.657 ms
2026-04-05 01:37:13.610284 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-05 01:37:13.610297 | orchestrator | + ping -c3 192.168.112.189
2026-04-05 01:37:13.623255 | orchestrator | PING 192.168.112.189 (192.168.112.189) 56(84) bytes of data.
2026-04-05 01:37:13.623347 | orchestrator | 64 bytes from 192.168.112.189: icmp_seq=1 ttl=63 time=6.99 ms
2026-04-05 01:37:14.619870 | orchestrator | 64 bytes from 192.168.112.189: icmp_seq=2 ttl=63 time=1.98 ms
2026-04-05 01:37:15.621654 | orchestrator | 64 bytes from 192.168.112.189: icmp_seq=3 ttl=63 time=2.07 ms
2026-04-05 01:37:15.621751 | orchestrator |
2026-04-05 01:37:15.621767 | orchestrator | --- 192.168.112.189 ping statistics ---
2026-04-05 01:37:15.621780 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-05 01:37:15.621792 | orchestrator | rtt min/avg/max/mdev = 1.984/3.680/6.987/2.338 ms
2026-04-05 01:37:15.621815 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3
2026-04-05 01:37:17.285287 | orchestrator | 2026-04-05 01:37:17 | ERROR  | Unable to get ansible vault password
2026-04-05 01:37:17.285395 | orchestrator | 2026-04-05 01:37:17 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-05 01:37:17.285413 | orchestrator | 2026-04-05 01:37:17 | ERROR  | Dropping encrypted entries
2026-04-05 01:37:18.994201 | orchestrator | 2026-04-05 01:37:18 | INFO  | Live migrating server ecb1f020-133a-4cd3-a5a2-4869c73071c2
2026-04-05 01:37:31.329111 | orchestrator | 2026-04-05 01:37:31 | INFO  | Live migration of ecb1f020-133a-4cd3-a5a2-4869c73071c2 (test-3) is still in progress
2026-04-05 01:37:33.757424 | orchestrator | 2026-04-05 01:37:33 | INFO  | Live migration of ecb1f020-133a-4cd3-a5a2-4869c73071c2 (test-3) is still in progress
2026-04-05 01:37:36.152435 | orchestrator | 2026-04-05 01:37:36 | INFO  | Live migration of ecb1f020-133a-4cd3-a5a2-4869c73071c2 (test-3) is still in progress
2026-04-05 01:37:38.511569 | orchestrator | 2026-04-05 01:37:38 | INFO  | Live migration of ecb1f020-133a-4cd3-a5a2-4869c73071c2 (test-3) is still in progress
2026-04-05 01:37:40.798960 | orchestrator | 2026-04-05 01:37:40 | INFO  | Live migration of ecb1f020-133a-4cd3-a5a2-4869c73071c2 (test-3) is still in progress
2026-04-05 01:37:43.095905 | orchestrator | 2026-04-05 01:37:43 | INFO  | Live migration of ecb1f020-133a-4cd3-a5a2-4869c73071c2 (test-3) is still in progress
2026-04-05 01:37:45.373208 | orchestrator | 2026-04-05 01:37:45 | INFO  | Live migration of ecb1f020-133a-4cd3-a5a2-4869c73071c2 (test-3) is still in progress
2026-04-05 01:37:47.689658 | orchestrator | 2026-04-05 01:37:47 | INFO  | Live migration of ecb1f020-133a-4cd3-a5a2-4869c73071c2 (test-3) is still in progress
2026-04-05 01:37:49.999977 | orchestrator | 2026-04-05 01:37:49 | INFO  | Live migration of ecb1f020-133a-4cd3-a5a2-4869c73071c2 (test-3) completed with status ACTIVE
2026-04-05 01:37:50.000111 | orchestrator | 2026-04-05 01:37:49 | INFO  | Live migrating server a4a227eb-4c47-4366-b733-a94348f3f8b9
2026-04-05 01:38:00.362974 | orchestrator | 2026-04-05 01:38:00 | INFO  | Live migration of a4a227eb-4c47-4366-b733-a94348f3f8b9 (test-4) is still in progress
2026-04-05 01:38:02.762248 | orchestrator | 2026-04-05 01:38:02 | INFO  | Live migration of a4a227eb-4c47-4366-b733-a94348f3f8b9 (test-4) is still in progress
2026-04-05 01:38:05.052364 | orchestrator | 2026-04-05 01:38:05 | INFO  | Live migration of a4a227eb-4c47-4366-b733-a94348f3f8b9 (test-4) is still in progress
2026-04-05 01:38:07.447596 | orchestrator | 2026-04-05 01:38:07 | INFO  | Live migration of a4a227eb-4c47-4366-b733-a94348f3f8b9 (test-4) is still in progress
2026-04-05 01:38:09.786703 | orchestrator | 2026-04-05 01:38:09 | INFO  | Live migration of a4a227eb-4c47-4366-b733-a94348f3f8b9 (test-4) is still in progress
2026-04-05 01:38:12.147370 | orchestrator | 2026-04-05 01:38:12 | INFO  | Live migration of a4a227eb-4c47-4366-b733-a94348f3f8b9 (test-4) is still in progress
2026-04-05 01:38:14.483164 | orchestrator | 2026-04-05 01:38:14 | INFO  | Live migration of a4a227eb-4c47-4366-b733-a94348f3f8b9 (test-4) is still in progress
2026-04-05 01:38:16.845101 | orchestrator | 2026-04-05 01:38:16 | INFO  | Live migration of a4a227eb-4c47-4366-b733-a94348f3f8b9 (test-4) is still in progress
2026-04-05 01:38:19.215144 | orchestrator | 2026-04-05 01:38:19 | INFO  | Live migration of a4a227eb-4c47-4366-b733-a94348f3f8b9 (test-4) completed with status ACTIVE
2026-04-05 01:38:19.215278 | orchestrator | 2026-04-05 01:38:19 | INFO  | Live migrating server 89c08efd-2a2f-4eb1-b55d-b6e7fa3cb679
2026-04-05 01:38:29.563566 | orchestrator | 2026-04-05 01:38:29 | INFO  | Live migration of 89c08efd-2a2f-4eb1-b55d-b6e7fa3cb679 (test-1) is still in progress
2026-04-05 01:38:31.920981 | orchestrator | 2026-04-05 01:38:31 | INFO  | Live migration of 89c08efd-2a2f-4eb1-b55d-b6e7fa3cb679 (test-1) is still in progress
2026-04-05 01:38:34.301460 | orchestrator | 2026-04-05 01:38:34 | INFO  | Live migration of 89c08efd-2a2f-4eb1-b55d-b6e7fa3cb679 (test-1) is still in progress
2026-04-05 01:38:36.599553 | orchestrator | 2026-04-05 01:38:36 | INFO  | Live migration of 89c08efd-2a2f-4eb1-b55d-b6e7fa3cb679 (test-1) is still in progress
2026-04-05 01:38:38.951837 | orchestrator | 2026-04-05 01:38:38 | INFO  | Live migration of 89c08efd-2a2f-4eb1-b55d-b6e7fa3cb679 (test-1) is still in progress
2026-04-05 01:38:41.256871 | orchestrator | 2026-04-05 01:38:41 | INFO  | Live migration of 89c08efd-2a2f-4eb1-b55d-b6e7fa3cb679 (test-1) is still in progress
2026-04-05 01:38:43.642108 | orchestrator | 2026-04-05 01:38:43 | INFO  | Live migration of 89c08efd-2a2f-4eb1-b55d-b6e7fa3cb679 (test-1) is still in progress
2026-04-05 01:38:45.891636 | orchestrator | 2026-04-05 01:38:45 | INFO  | Live migration of 89c08efd-2a2f-4eb1-b55d-b6e7fa3cb679 (test-1) is still in progress
2026-04-05 01:38:48.274941 | orchestrator | 2026-04-05 01:38:48 | INFO  | Live migration of 89c08efd-2a2f-4eb1-b55d-b6e7fa3cb679 (test-1) is still in progress
2026-04-05 01:38:50.668874 | orchestrator | 2026-04-05 01:38:50 | INFO  | Live migration of 89c08efd-2a2f-4eb1-b55d-b6e7fa3cb679 (test-1) completed with status ACTIVE
2026-04-05 01:38:50.669024 | orchestrator | 2026-04-05 01:38:50 | INFO  | Live migrating server d94395d9-a06e-4f52-bf61-5d8ecbf752b6
2026-04-05 01:39:01.769909 | orchestrator | 2026-04-05 01:39:01 | INFO  | Live migration of d94395d9-a06e-4f52-bf61-5d8ecbf752b6 (test-2) is still in progress
2026-04-05 01:39:04.146278 | orchestrator | 2026-04-05 01:39:04 | INFO  | Live migration of d94395d9-a06e-4f52-bf61-5d8ecbf752b6 (test-2) is still in progress
2026-04-05 01:39:06.479090 | orchestrator | 2026-04-05 01:39:06 | INFO  | Live migration of d94395d9-a06e-4f52-bf61-5d8ecbf752b6 (test-2) is still in progress
2026-04-05 01:39:08.855899 | orchestrator | 2026-04-05 01:39:08 | INFO  | Live migration of d94395d9-a06e-4f52-bf61-5d8ecbf752b6 (test-2) is still in progress
2026-04-05 01:39:11.150385 | orchestrator | 2026-04-05 01:39:11 | INFO  | Live migration of d94395d9-a06e-4f52-bf61-5d8ecbf752b6 (test-2) is still in progress
2026-04-05 01:39:13.524501 | orchestrator | 2026-04-05 01:39:13 | INFO  | Live migration of d94395d9-a06e-4f52-bf61-5d8ecbf752b6 (test-2) is still in progress
2026-04-05 01:39:15.894383 | orchestrator | 2026-04-05 01:39:15 | INFO  | Live migration of d94395d9-a06e-4f52-bf61-5d8ecbf752b6 (test-2) is still in progress
2026-04-05 01:39:18.290515 | orchestrator | 2026-04-05 01:39:18 | INFO  | Live migration of d94395d9-a06e-4f52-bf61-5d8ecbf752b6 (test-2) is still in progress
2026-04-05 01:39:20.567995 | orchestrator | 2026-04-05 01:39:20 | INFO  | Live migration of d94395d9-a06e-4f52-bf61-5d8ecbf752b6 (test-2) completed with status ACTIVE
2026-04-05 01:39:20.568098 | orchestrator | 2026-04-05 01:39:20 | INFO  | Live migrating server 38c7f575-7855-416d-a20d-6f0a41a1c9eb
2026-04-05 01:39:31.143869 | orchestrator | 2026-04-05 01:39:31 | INFO  | Live migration of 38c7f575-7855-416d-a20d-6f0a41a1c9eb (test) is still in progress
2026-04-05 01:39:33.559687 | orchestrator | 2026-04-05
01:39:33 | INFO  | Live migration of 38c7f575-7855-416d-a20d-6f0a41a1c9eb (test) is still in progress 2026-04-05 01:39:35.965958 | orchestrator | 2026-04-05 01:39:35 | INFO  | Live migration of 38c7f575-7855-416d-a20d-6f0a41a1c9eb (test) is still in progress 2026-04-05 01:39:38.270893 | orchestrator | 2026-04-05 01:39:38 | INFO  | Live migration of 38c7f575-7855-416d-a20d-6f0a41a1c9eb (test) is still in progress 2026-04-05 01:39:40.558380 | orchestrator | 2026-04-05 01:39:40 | INFO  | Live migration of 38c7f575-7855-416d-a20d-6f0a41a1c9eb (test) is still in progress 2026-04-05 01:39:42.849647 | orchestrator | 2026-04-05 01:39:42 | INFO  | Live migration of 38c7f575-7855-416d-a20d-6f0a41a1c9eb (test) is still in progress 2026-04-05 01:39:45.136827 | orchestrator | 2026-04-05 01:39:45 | INFO  | Live migration of 38c7f575-7855-416d-a20d-6f0a41a1c9eb (test) is still in progress 2026-04-05 01:39:47.417606 | orchestrator | 2026-04-05 01:39:47 | INFO  | Live migration of 38c7f575-7855-416d-a20d-6f0a41a1c9eb (test) is still in progress 2026-04-05 01:39:49.808576 | orchestrator | 2026-04-05 01:39:49 | INFO  | Live migration of 38c7f575-7855-416d-a20d-6f0a41a1c9eb (test) is still in progress 2026-04-05 01:39:52.208773 | orchestrator | 2026-04-05 01:39:52 | INFO  | Live migration of 38c7f575-7855-416d-a20d-6f0a41a1c9eb (test) is still in progress 2026-04-05 01:39:54.545013 | orchestrator | 2026-04-05 01:39:54 | INFO  | Live migration of 38c7f575-7855-416d-a20d-6f0a41a1c9eb (test) completed with status ACTIVE 2026-04-05 01:39:54.902908 | orchestrator | + compute_list 2026-04-05 01:39:54.903008 | orchestrator | + osism manage compute list testbed-node-3 2026-04-05 01:39:56.512162 | orchestrator | 2026-04-05 01:39:56 | ERROR  | Unable to get ansible vault password 2026-04-05 01:39:56.512393 | orchestrator | 2026-04-05 01:39:56 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-05 01:39:56.512416 | orchestrator 
| 2026-04-05 01:39:56 | ERROR  | Dropping encrypted entries
2026-04-05 01:39:57.874628 | orchestrator | +------+--------+----------+
2026-04-05 01:39:57.874750 | orchestrator | | ID | Name | Status |
2026-04-05 01:39:57.874763 | orchestrator | |------+--------+----------|
2026-04-05 01:39:57.874775 | orchestrator | +------+--------+----------+
2026-04-05 01:39:58.236895 | orchestrator | + osism manage compute list testbed-node-4
2026-04-05 01:39:59.862830 | orchestrator | 2026-04-05 01:39:59 | ERROR  | Unable to get ansible vault password
2026-04-05 01:39:59.862952 | orchestrator | 2026-04-05 01:39:59 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-05 01:39:59.862969 | orchestrator | 2026-04-05 01:39:59 | ERROR  | Dropping encrypted entries
2026-04-05 01:40:01.602281 | orchestrator | +--------------------------------------+--------+----------+
2026-04-05 01:40:01.602394 | orchestrator | | ID | Name | Status |
2026-04-05 01:40:01.602405 | orchestrator | |--------------------------------------+--------+----------|
2026-04-05 01:40:01.602412 | orchestrator | | ecb1f020-133a-4cd3-a5a2-4869c73071c2 | test-3 | ACTIVE |
2026-04-05 01:40:01.602418 | orchestrator | | a4a227eb-4c47-4366-b733-a94348f3f8b9 | test-4 | ACTIVE |
2026-04-05 01:40:01.602425 | orchestrator | | 89c08efd-2a2f-4eb1-b55d-b6e7fa3cb679 | test-1 | ACTIVE |
2026-04-05 01:40:01.602431 | orchestrator | | d94395d9-a06e-4f52-bf61-5d8ecbf752b6 | test-2 | ACTIVE |
2026-04-05 01:40:01.602437 | orchestrator | | 38c7f575-7855-416d-a20d-6f0a41a1c9eb | test | ACTIVE |
2026-04-05 01:40:01.602444 | orchestrator | +--------------------------------------+--------+----------+
2026-04-05 01:40:01.955845 | orchestrator | + osism manage compute list testbed-node-5
2026-04-05 01:40:03.589692 | orchestrator | 2026-04-05 01:40:03 | ERROR  | Unable to get ansible vault password
2026-04-05 01:40:03.589855 | orchestrator | 2026-04-05 01:40:03 | ERROR  | Unable
to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-05 01:40:03.589873 | orchestrator | 2026-04-05 01:40:03 | ERROR  | Dropping encrypted entries 2026-04-05 01:40:04.774867 | orchestrator | +------+--------+----------+ 2026-04-05 01:40:04.775017 | orchestrator | | ID | Name | Status | 2026-04-05 01:40:04.775037 | orchestrator | |------+--------+----------| 2026-04-05 01:40:04.775061 | orchestrator | +------+--------+----------+ 2026-04-05 01:40:05.155714 | orchestrator | + server_ping 2026-04-05 01:40:05.156421 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-04-05 01:40:05.156455 | orchestrator | ++ tr -d '\r' 2026-04-05 01:40:08.227357 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-05 01:40:08.227463 | orchestrator | + ping -c3 192.168.112.132 2026-04-05 01:40:08.235385 | orchestrator | PING 192.168.112.132 (192.168.112.132) 56(84) bytes of data. 
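The repeated "Unable to get ansible vault password" errors above all trace back to the same missing file, /share/ansible_vault_password.key, after which osism drops encrypted inventory entries and continues. A pre-flight check like the following could surface the condition once, up front, instead of on every invocation. This is an illustrative sketch, not part of the testbed scripts; the helper name and the warn-only behaviour are assumptions, and only the key path is taken from the log:

```shell
# Hypothetical pre-flight check; the path is taken from the error
# messages in this log. Returns non-zero (with a warning) when the
# vault key is missing or unreadable, zero otherwise.
check_vault_key() {
    local key="${1:-/share/ansible_vault_password.key}"
    if [ ! -r "$key" ]; then
        # Warn only: osism itself degrades gracefully by dropping
        # encrypted entries, so this is informational.
        echo "WARNING: vault key $key missing or unreadable; encrypted inventory entries will be dropped" >&2
        return 1
    fi
}
```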
2026-04-05 01:40:08.235453 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=1 ttl=63 time=5.92 ms 2026-04-05 01:40:09.233363 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=2 ttl=63 time=2.55 ms 2026-04-05 01:40:10.234134 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=3 ttl=63 time=1.42 ms 2026-04-05 01:40:10.234260 | orchestrator | 2026-04-05 01:40:10.234285 | orchestrator | --- 192.168.112.132 ping statistics --- 2026-04-05 01:40:10.234304 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-04-05 01:40:10.234320 | orchestrator | rtt min/avg/max/mdev = 1.417/3.295/5.920/1.912 ms 2026-04-05 01:40:10.234339 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-05 01:40:10.234804 | orchestrator | + ping -c3 192.168.112.130 2026-04-05 01:40:10.247314 | orchestrator | PING 192.168.112.130 (192.168.112.130) 56(84) bytes of data. 2026-04-05 01:40:10.247430 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=1 ttl=63 time=8.43 ms 2026-04-05 01:40:11.243023 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=2 ttl=63 time=2.15 ms 2026-04-05 01:40:12.244427 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=3 ttl=63 time=1.83 ms 2026-04-05 01:40:12.244511 | orchestrator | 2026-04-05 01:40:12.244519 | orchestrator | --- 192.168.112.130 ping statistics --- 2026-04-05 01:40:12.244527 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-04-05 01:40:12.244534 | orchestrator | rtt min/avg/max/mdev = 1.834/4.137/8.433/3.039 ms 2026-04-05 01:40:12.245548 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-05 01:40:12.245621 | orchestrator | + ping -c3 192.168.112.113 2026-04-05 01:40:12.257080 | orchestrator | PING 192.168.112.113 (192.168.112.113) 56(84) bytes of data. 
2026-04-05 01:40:12.257129 | orchestrator | 64 bytes from 192.168.112.113: icmp_seq=1 ttl=63 time=6.40 ms 2026-04-05 01:40:13.254669 | orchestrator | 64 bytes from 192.168.112.113: icmp_seq=2 ttl=63 time=2.01 ms 2026-04-05 01:40:14.257082 | orchestrator | 64 bytes from 192.168.112.113: icmp_seq=3 ttl=63 time=2.19 ms 2026-04-05 01:40:14.257184 | orchestrator | 2026-04-05 01:40:14.257208 | orchestrator | --- 192.168.112.113 ping statistics --- 2026-04-05 01:40:14.257288 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-05 01:40:14.257310 | orchestrator | rtt min/avg/max/mdev = 2.005/3.534/6.403/2.030 ms 2026-04-05 01:40:14.257329 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-05 01:40:14.257346 | orchestrator | + ping -c3 192.168.112.185 2026-04-05 01:40:14.268968 | orchestrator | PING 192.168.112.185 (192.168.112.185) 56(84) bytes of data. 2026-04-05 01:40:14.269064 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=1 ttl=63 time=6.88 ms 2026-04-05 01:40:15.266263 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=2 ttl=63 time=2.71 ms 2026-04-05 01:40:16.267848 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=3 ttl=63 time=2.22 ms 2026-04-05 01:40:16.267952 | orchestrator | 2026-04-05 01:40:16.267967 | orchestrator | --- 192.168.112.185 ping statistics --- 2026-04-05 01:40:16.267988 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-05 01:40:16.268008 | orchestrator | rtt min/avg/max/mdev = 2.218/3.937/6.882/2.091 ms 2026-04-05 01:40:16.268782 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-05 01:40:16.268814 | orchestrator | + ping -c3 192.168.112.189 2026-04-05 01:40:16.280640 | orchestrator | PING 192.168.112.189 (192.168.112.189) 56(84) bytes of data. 
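The evacuation pattern traced in this log (run `osism manage compute migrate --yes --target <dst> <src>`, then `osism manage compute list <src>` to confirm the source hypervisor is empty) can be sketched as a generic "run, then poll until clear" helper. This is a hedged illustration: the helper name, the grep-based emptiness check, and the POLL_INTERVAL knob are assumptions, not part of osism, and only the commands in the usage comment come from the log:

```shell
# Generic poll loop: re-run a listing command until its output no
# longer contains the given pattern (e.g. ACTIVE rows on the source
# node). POLL_INTERVAL defaults to the 5 s cadence seen in the log.
wait_until_clear() {
    local pattern="$1"; shift
    while "$@" | grep -q "$pattern"; do
        sleep "${POLL_INTERVAL:-5}"
    done
}

# Intended usage against the commands from the log (not executed here):
#   osism manage compute migrate --yes --target testbed-node-4 testbed-node-3
#   wait_until_clear ACTIVE osism manage compute list testbed-node-3
```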
2026-04-05 01:40:16.280720 | orchestrator | 64 bytes from 192.168.112.189: icmp_seq=1 ttl=63 time=8.97 ms 2026-04-05 01:40:17.275477 | orchestrator | 64 bytes from 192.168.112.189: icmp_seq=2 ttl=63 time=2.20 ms 2026-04-05 01:40:18.277212 | orchestrator | 64 bytes from 192.168.112.189: icmp_seq=3 ttl=63 time=2.10 ms 2026-04-05 01:40:18.277378 | orchestrator | 2026-04-05 01:40:18.277395 | orchestrator | --- 192.168.112.189 ping statistics --- 2026-04-05 01:40:18.277409 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-05 01:40:18.277420 | orchestrator | rtt min/avg/max/mdev = 2.103/4.425/8.969/3.213 ms 2026-04-05 01:40:18.277432 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4 2026-04-05 01:40:19.922264 | orchestrator | 2026-04-05 01:40:19 | ERROR  | Unable to get ansible vault password 2026-04-05 01:40:19.922376 | orchestrator | 2026-04-05 01:40:19 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-05 01:40:19.922392 | orchestrator | 2026-04-05 01:40:19 | ERROR  | Dropping encrypted entries 2026-04-05 01:40:21.654482 | orchestrator | 2026-04-05 01:40:21 | INFO  | Live migrating server ecb1f020-133a-4cd3-a5a2-4869c73071c2 2026-04-05 01:40:31.428833 | orchestrator | 2026-04-05 01:40:31 | INFO  | Live migration of ecb1f020-133a-4cd3-a5a2-4869c73071c2 (test-3) is still in progress 2026-04-05 01:40:33.769989 | orchestrator | 2026-04-05 01:40:33 | INFO  | Live migration of ecb1f020-133a-4cd3-a5a2-4869c73071c2 (test-3) is still in progress 2026-04-05 01:40:36.132698 | orchestrator | 2026-04-05 01:40:36 | INFO  | Live migration of ecb1f020-133a-4cd3-a5a2-4869c73071c2 (test-3) is still in progress 2026-04-05 01:40:38.405766 | orchestrator | 2026-04-05 01:40:38 | INFO  | Live migration of ecb1f020-133a-4cd3-a5a2-4869c73071c2 (test-3) is still in progress 2026-04-05 01:40:40.706833 | orchestrator | 2026-04-05 01:40:40 | INFO  | 
Live migration of ecb1f020-133a-4cd3-a5a2-4869c73071c2 (test-3) is still in progress 2026-04-05 01:40:43.017986 | orchestrator | 2026-04-05 01:40:43 | INFO  | Live migration of ecb1f020-133a-4cd3-a5a2-4869c73071c2 (test-3) is still in progress 2026-04-05 01:40:45.368752 | orchestrator | 2026-04-05 01:40:45 | INFO  | Live migration of ecb1f020-133a-4cd3-a5a2-4869c73071c2 (test-3) is still in progress 2026-04-05 01:40:47.732617 | orchestrator | 2026-04-05 01:40:47 | INFO  | Live migration of ecb1f020-133a-4cd3-a5a2-4869c73071c2 (test-3) is still in progress 2026-04-05 01:40:50.076982 | orchestrator | 2026-04-05 01:40:50 | INFO  | Live migration of ecb1f020-133a-4cd3-a5a2-4869c73071c2 (test-3) is still in progress 2026-04-05 01:40:52.343507 | orchestrator | 2026-04-05 01:40:52 | INFO  | Live migration of ecb1f020-133a-4cd3-a5a2-4869c73071c2 (test-3) completed with status ACTIVE 2026-04-05 01:40:52.343680 | orchestrator | 2026-04-05 01:40:52 | INFO  | Live migrating server a4a227eb-4c47-4366-b733-a94348f3f8b9 2026-04-05 01:41:04.781359 | orchestrator | 2026-04-05 01:41:04 | INFO  | Live migration of a4a227eb-4c47-4366-b733-a94348f3f8b9 (test-4) is still in progress 2026-04-05 01:41:07.247451 | orchestrator | 2026-04-05 01:41:07 | INFO  | Live migration of a4a227eb-4c47-4366-b733-a94348f3f8b9 (test-4) is still in progress 2026-04-05 01:41:09.815810 | orchestrator | 2026-04-05 01:41:09 | INFO  | Live migration of a4a227eb-4c47-4366-b733-a94348f3f8b9 (test-4) is still in progress 2026-04-05 01:41:12.213627 | orchestrator | 2026-04-05 01:41:12 | INFO  | Live migration of a4a227eb-4c47-4366-b733-a94348f3f8b9 (test-4) is still in progress 2026-04-05 01:41:14.544421 | orchestrator | 2026-04-05 01:41:14 | INFO  | Live migration of a4a227eb-4c47-4366-b733-a94348f3f8b9 (test-4) is still in progress 2026-04-05 01:41:17.033625 | orchestrator | 2026-04-05 01:41:17 | INFO  | Live migration of a4a227eb-4c47-4366-b733-a94348f3f8b9 (test-4) is still in progress 2026-04-05 
01:41:19.344564 | orchestrator | 2026-04-05 01:41:19 | INFO  | Live migration of a4a227eb-4c47-4366-b733-a94348f3f8b9 (test-4) is still in progress 2026-04-05 01:41:21.698141 | orchestrator | 2026-04-05 01:41:21 | INFO  | Live migration of a4a227eb-4c47-4366-b733-a94348f3f8b9 (test-4) is still in progress 2026-04-05 01:41:24.036663 | orchestrator | 2026-04-05 01:41:24 | INFO  | Live migration of a4a227eb-4c47-4366-b733-a94348f3f8b9 (test-4) completed with status ACTIVE 2026-04-05 01:41:24.036772 | orchestrator | 2026-04-05 01:41:24 | INFO  | Live migrating server 89c08efd-2a2f-4eb1-b55d-b6e7fa3cb679 2026-04-05 01:41:35.877114 | orchestrator | 2026-04-05 01:41:35 | INFO  | Live migration of 89c08efd-2a2f-4eb1-b55d-b6e7fa3cb679 (test-1) is still in progress 2026-04-05 01:41:38.223901 | orchestrator | 2026-04-05 01:41:38 | INFO  | Live migration of 89c08efd-2a2f-4eb1-b55d-b6e7fa3cb679 (test-1) is still in progress 2026-04-05 01:41:40.603534 | orchestrator | 2026-04-05 01:41:40 | INFO  | Live migration of 89c08efd-2a2f-4eb1-b55d-b6e7fa3cb679 (test-1) is still in progress 2026-04-05 01:41:42.968809 | orchestrator | 2026-04-05 01:41:42 | INFO  | Live migration of 89c08efd-2a2f-4eb1-b55d-b6e7fa3cb679 (test-1) is still in progress 2026-04-05 01:41:45.309244 | orchestrator | 2026-04-05 01:41:45 | INFO  | Live migration of 89c08efd-2a2f-4eb1-b55d-b6e7fa3cb679 (test-1) is still in progress 2026-04-05 01:41:47.645071 | orchestrator | 2026-04-05 01:41:47 | INFO  | Live migration of 89c08efd-2a2f-4eb1-b55d-b6e7fa3cb679 (test-1) is still in progress 2026-04-05 01:41:49.900952 | orchestrator | 2026-04-05 01:41:49 | INFO  | Live migration of 89c08efd-2a2f-4eb1-b55d-b6e7fa3cb679 (test-1) is still in progress 2026-04-05 01:41:52.187846 | orchestrator | 2026-04-05 01:41:52 | INFO  | Live migration of 89c08efd-2a2f-4eb1-b55d-b6e7fa3cb679 (test-1) is still in progress 2026-04-05 01:41:54.552813 | orchestrator | 2026-04-05 01:41:54 | INFO  | Live migration of 
89c08efd-2a2f-4eb1-b55d-b6e7fa3cb679 (test-1) completed with status ACTIVE 2026-04-05 01:41:54.553032 | orchestrator | 2026-04-05 01:41:54 | INFO  | Live migrating server d94395d9-a06e-4f52-bf61-5d8ecbf752b6 2026-04-05 01:42:04.869894 | orchestrator | 2026-04-05 01:42:04 | INFO  | Live migration of d94395d9-a06e-4f52-bf61-5d8ecbf752b6 (test-2) is still in progress 2026-04-05 01:42:07.265182 | orchestrator | 2026-04-05 01:42:07 | INFO  | Live migration of d94395d9-a06e-4f52-bf61-5d8ecbf752b6 (test-2) is still in progress 2026-04-05 01:42:09.672994 | orchestrator | 2026-04-05 01:42:09 | INFO  | Live migration of d94395d9-a06e-4f52-bf61-5d8ecbf752b6 (test-2) is still in progress 2026-04-05 01:42:12.092764 | orchestrator | 2026-04-05 01:42:12 | INFO  | Live migration of d94395d9-a06e-4f52-bf61-5d8ecbf752b6 (test-2) is still in progress 2026-04-05 01:42:14.488282 | orchestrator | 2026-04-05 01:42:14 | INFO  | Live migration of d94395d9-a06e-4f52-bf61-5d8ecbf752b6 (test-2) is still in progress 2026-04-05 01:42:16.872468 | orchestrator | 2026-04-05 01:42:16 | INFO  | Live migration of d94395d9-a06e-4f52-bf61-5d8ecbf752b6 (test-2) is still in progress 2026-04-05 01:42:19.169561 | orchestrator | 2026-04-05 01:42:19 | INFO  | Live migration of d94395d9-a06e-4f52-bf61-5d8ecbf752b6 (test-2) is still in progress 2026-04-05 01:42:21.528784 | orchestrator | 2026-04-05 01:42:21 | INFO  | Live migration of d94395d9-a06e-4f52-bf61-5d8ecbf752b6 (test-2) is still in progress 2026-04-05 01:42:23.947004 | orchestrator | 2026-04-05 01:42:23 | INFO  | Live migration of d94395d9-a06e-4f52-bf61-5d8ecbf752b6 (test-2) completed with status ACTIVE 2026-04-05 01:42:23.947153 | orchestrator | 2026-04-05 01:42:23 | INFO  | Live migrating server 38c7f575-7855-416d-a20d-6f0a41a1c9eb 2026-04-05 01:42:34.256813 | orchestrator | 2026-04-05 01:42:34 | INFO  | Live migration of 38c7f575-7855-416d-a20d-6f0a41a1c9eb (test) is still in progress 2026-04-05 01:42:36.658467 | orchestrator | 2026-04-05 
01:42:36 | INFO  | Live migration of 38c7f575-7855-416d-a20d-6f0a41a1c9eb (test) is still in progress 2026-04-05 01:42:39.023941 | orchestrator | 2026-04-05 01:42:39 | INFO  | Live migration of 38c7f575-7855-416d-a20d-6f0a41a1c9eb (test) is still in progress 2026-04-05 01:42:41.439558 | orchestrator | 2026-04-05 01:42:41 | INFO  | Live migration of 38c7f575-7855-416d-a20d-6f0a41a1c9eb (test) is still in progress 2026-04-05 01:42:43.763121 | orchestrator | 2026-04-05 01:42:43 | INFO  | Live migration of 38c7f575-7855-416d-a20d-6f0a41a1c9eb (test) is still in progress 2026-04-05 01:42:46.187856 | orchestrator | 2026-04-05 01:42:46 | INFO  | Live migration of 38c7f575-7855-416d-a20d-6f0a41a1c9eb (test) is still in progress 2026-04-05 01:42:48.559897 | orchestrator | 2026-04-05 01:42:48 | INFO  | Live migration of 38c7f575-7855-416d-a20d-6f0a41a1c9eb (test) is still in progress 2026-04-05 01:42:50.910097 | orchestrator | 2026-04-05 01:42:50 | INFO  | Live migration of 38c7f575-7855-416d-a20d-6f0a41a1c9eb (test) is still in progress 2026-04-05 01:42:53.203155 | orchestrator | 2026-04-05 01:42:53 | INFO  | Live migration of 38c7f575-7855-416d-a20d-6f0a41a1c9eb (test) is still in progress 2026-04-05 01:42:55.543297 | orchestrator | 2026-04-05 01:42:55 | INFO  | Live migration of 38c7f575-7855-416d-a20d-6f0a41a1c9eb (test) is still in progress 2026-04-05 01:42:57.956136 | orchestrator | 2026-04-05 01:42:57 | INFO  | Live migration of 38c7f575-7855-416d-a20d-6f0a41a1c9eb (test) completed with status ACTIVE 2026-04-05 01:42:58.297982 | orchestrator | + compute_list 2026-04-05 01:42:58.298221 | orchestrator | + osism manage compute list testbed-node-3 2026-04-05 01:42:59.901046 | orchestrator | 2026-04-05 01:42:59 | ERROR  | Unable to get ansible vault password 2026-04-05 01:42:59.901184 | orchestrator | 2026-04-05 01:42:59 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-05 01:42:59.901233 | orchestrator 
| 2026-04-05 01:42:59 | ERROR  | Dropping encrypted entries
2026-04-05 01:43:01.129754 | orchestrator | +------+--------+----------+
2026-04-05 01:43:01.129863 | orchestrator | | ID | Name | Status |
2026-04-05 01:43:01.129879 | orchestrator | |------+--------+----------|
2026-04-05 01:43:01.129891 | orchestrator | +------+--------+----------+
2026-04-05 01:43:01.475728 | orchestrator | + osism manage compute list testbed-node-4
2026-04-05 01:43:03.169515 | orchestrator | 2026-04-05 01:43:03 | ERROR  | Unable to get ansible vault password
2026-04-05 01:43:03.169633 | orchestrator | 2026-04-05 01:43:03 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-05 01:43:03.169662 | orchestrator | 2026-04-05 01:43:03 | ERROR  | Dropping encrypted entries
2026-04-05 01:43:04.355935 | orchestrator | +------+--------+----------+
2026-04-05 01:43:04.356028 | orchestrator | | ID | Name | Status |
2026-04-05 01:43:04.356042 | orchestrator | |------+--------+----------|
2026-04-05 01:43:04.356088 | orchestrator | +------+--------+----------+
2026-04-05 01:43:04.699826 | orchestrator | + osism manage compute list testbed-node-5
2026-04-05 01:43:06.401822 | orchestrator | 2026-04-05 01:43:06 | ERROR  | Unable to get ansible vault password
2026-04-05 01:43:06.401914 | orchestrator | 2026-04-05 01:43:06 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-05 01:43:06.401930 | orchestrator | 2026-04-05 01:43:06 | ERROR  | Dropping encrypted entries
2026-04-05 01:43:08.114543 | orchestrator | +--------------------------------------+--------+----------+
2026-04-05 01:43:08.114646 | orchestrator | | ID | Name | Status |
2026-04-05 01:43:08.114669 | orchestrator | |--------------------------------------+--------+----------|
2026-04-05 01:43:08.114677 | orchestrator | | ecb1f020-133a-4cd3-a5a2-4869c73071c2 | test-3 | ACTIVE |
2026-04-05 01:43:08.114685 | orchestrator | | a4a227eb-4c47-4366-b733-a94348f3f8b9 | test-4 | ACTIVE |
2026-04-05 01:43:08.114693 | orchestrator | | 89c08efd-2a2f-4eb1-b55d-b6e7fa3cb679 | test-1 | ACTIVE |
2026-04-05 01:43:08.114701 | orchestrator | | d94395d9-a06e-4f52-bf61-5d8ecbf752b6 | test-2 | ACTIVE |
2026-04-05 01:43:08.114710 | orchestrator | | 38c7f575-7855-416d-a20d-6f0a41a1c9eb | test | ACTIVE |
2026-04-05 01:43:08.114718 | orchestrator | +--------------------------------------+--------+----------+
2026-04-05 01:43:08.441271 | orchestrator | + server_ping
2026-04-05 01:43:08.694288 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-04-05 01:43:08.694391 | orchestrator | ++ tr -d '\r'
2026-04-05 01:43:11.379571 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-05 01:43:11.379671 | orchestrator | + ping -c3 192.168.112.132
2026-04-05 01:43:11.391029 | orchestrator | PING 192.168.112.132 (192.168.112.132) 56(84) bytes of data.
2026-04-05 01:43:11.391176 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=1 ttl=63 time=9.62 ms 2026-04-05 01:43:12.386757 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=2 ttl=63 time=3.29 ms 2026-04-05 01:43:13.387035 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=3 ttl=63 time=1.95 ms 2026-04-05 01:43:13.387209 | orchestrator | 2026-04-05 01:43:13.387229 | orchestrator | --- 192.168.112.132 ping statistics --- 2026-04-05 01:43:13.387319 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-04-05 01:43:13.387335 | orchestrator | rtt min/avg/max/mdev = 1.952/4.953/9.615/3.341 ms 2026-04-05 01:43:13.388364 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-05 01:43:13.388397 | orchestrator | + ping -c3 192.168.112.130 2026-04-05 01:43:13.404825 | orchestrator | PING 192.168.112.130 (192.168.112.130) 56(84) bytes of data. 2026-04-05 01:43:13.404930 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=1 ttl=63 time=11.4 ms 2026-04-05 01:43:14.398370 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=2 ttl=63 time=3.23 ms 2026-04-05 01:43:15.399620 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=3 ttl=63 time=1.96 ms 2026-04-05 01:43:15.399712 | orchestrator | 2026-04-05 01:43:15.399741 | orchestrator | --- 192.168.112.130 ping statistics --- 2026-04-05 01:43:15.399752 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-05 01:43:15.399761 | orchestrator | rtt min/avg/max/mdev = 1.955/5.537/11.432/4.200 ms 2026-04-05 01:43:15.399823 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-05 01:43:15.400155 | orchestrator | + ping -c3 192.168.112.113 2026-04-05 01:43:15.412125 | orchestrator | PING 192.168.112.113 (192.168.112.113) 56(84) bytes of data. 
2026-04-05 01:43:15.412204 | orchestrator | 64 bytes from 192.168.112.113: icmp_seq=1 ttl=63 time=7.31 ms 2026-04-05 01:43:16.409497 | orchestrator | 64 bytes from 192.168.112.113: icmp_seq=2 ttl=63 time=2.75 ms 2026-04-05 01:43:17.411763 | orchestrator | 64 bytes from 192.168.112.113: icmp_seq=3 ttl=63 time=2.12 ms 2026-04-05 01:43:17.411862 | orchestrator | 2026-04-05 01:43:17.411878 | orchestrator | --- 192.168.112.113 ping statistics --- 2026-04-05 01:43:17.411891 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-04-05 01:43:17.411903 | orchestrator | rtt min/avg/max/mdev = 2.119/4.062/7.314/2.314 ms 2026-04-05 01:43:17.412352 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-05 01:43:17.412382 | orchestrator | + ping -c3 192.168.112.185 2026-04-05 01:43:17.423737 | orchestrator | PING 192.168.112.185 (192.168.112.185) 56(84) bytes of data. 2026-04-05 01:43:17.423846 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=1 ttl=63 time=7.14 ms 2026-04-05 01:43:18.420893 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=2 ttl=63 time=2.72 ms 2026-04-05 01:43:19.422160 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=3 ttl=63 time=1.68 ms 2026-04-05 01:43:19.422296 | orchestrator | 2026-04-05 01:43:19.422320 | orchestrator | --- 192.168.112.185 ping statistics --- 2026-04-05 01:43:19.422331 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-04-05 01:43:19.422341 | orchestrator | rtt min/avg/max/mdev = 1.676/3.844/7.137/2.366 ms 2026-04-05 01:43:19.422436 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-05 01:43:19.422449 | orchestrator | + ping -c3 192.168.112.189 2026-04-05 01:43:19.434904 | orchestrator | PING 192.168.112.189 (192.168.112.189) 56(84) bytes of data. 
2026-04-05 01:43:19.435013 | orchestrator | 64 bytes from 192.168.112.189: icmp_seq=1 ttl=63 time=5.01 ms
2026-04-05 01:43:20.433602 | orchestrator | 64 bytes from 192.168.112.189: icmp_seq=2 ttl=63 time=2.36 ms
2026-04-05 01:43:21.434949 | orchestrator | 64 bytes from 192.168.112.189: icmp_seq=3 ttl=63 time=2.10 ms
2026-04-05 01:43:21.435090 | orchestrator |
2026-04-05 01:43:21.435108 | orchestrator | --- 192.168.112.189 ping statistics ---
2026-04-05 01:43:21.435122 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-04-05 01:43:21.435134 | orchestrator | rtt min/avg/max/mdev = 2.104/3.157/5.011/1.314 ms
2026-04-05 01:43:21.553162 | orchestrator | ok: Runtime: 0:20:15.271294
2026-04-05 01:43:21.596977 |
2026-04-05 01:43:21.597165 | TASK [Run tempest]
2026-04-05 01:43:22.274983 | orchestrator | + set -e
2026-04-05 01:43:22.275355 | orchestrator | + source /opt/manager-vars.sh
2026-04-05 01:43:22.275392 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-05 01:43:22.275407 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-05 01:43:22.275420 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-05 01:43:22.275434 | orchestrator | ++ CEPH_VERSION=reef
2026-04-05 01:43:22.275448 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-05 01:43:22.275494 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-05 01:43:22.275517 | orchestrator | ++ export MANAGER_VERSION=latest
2026-04-05 01:43:22.275538 | orchestrator | ++ MANAGER_VERSION=latest
2026-04-05 01:43:22.275550 | orchestrator | ++ export OPENSTACK_VERSION=2025.1
2026-04-05 01:43:22.275570 | orchestrator | ++ OPENSTACK_VERSION=2025.1
2026-04-05 01:43:22.275597 | orchestrator | ++ export ARA=false
2026-04-05 01:43:22.275609 | orchestrator | ++ ARA=false
2026-04-05 01:43:22.275625 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-05 01:43:22.275636 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-05 01:43:22.275647 | orchestrator | ++ export TEMPEST=true
2026-04-05 01:43:22.275661 | orchestrator | ++ TEMPEST=true
2026-04-05 01:43:22.275673 | orchestrator | ++ export IS_ZUUL=true
2026-04-05 01:43:22.275684 | orchestrator | ++ IS_ZUUL=true
2026-04-05 01:43:22.275696 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182
2026-04-05 01:43:22.275708 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182
2026-04-05 01:43:22.275718 | orchestrator | ++ export EXTERNAL_API=false
2026-04-05 01:43:22.275730 | orchestrator | ++ EXTERNAL_API=false
2026-04-05 01:43:22.275740 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-05 01:43:22.275751 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-05 01:43:22.275762 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-05 01:43:22.275773 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-05 01:43:22.275785 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-05 01:43:22.275795 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-05 01:43:22.275806 | orchestrator | + echo
2026-04-05 01:43:22.275818 | orchestrator |
2026-04-05 01:43:22.275829 | orchestrator | # Tempest
2026-04-05 01:43:22.275841 | orchestrator |
2026-04-05 01:43:22.275851 | orchestrator | + echo '# Tempest'
2026-04-05 01:43:22.275863 | orchestrator | + echo
2026-04-05 01:43:22.275874 | orchestrator | + [[ ! -e /opt/tempest ]]
2026-04-05 01:43:22.275885 | orchestrator | + osism apply tempest --skip-tags run-tempest
2026-04-05 01:43:33.801471 | orchestrator | 2026-04-05 01:43:33 | INFO  | Prepare task for execution of tempest.
2026-04-05 01:43:33.878806 | orchestrator | 2026-04-05 01:43:33 | INFO  | Task a45334b9-3f5d-41bd-a7fa-7f913468f06b (tempest) was prepared for execution.
2026-04-05 01:43:33.878906 | orchestrator | 2026-04-05 01:43:33 | INFO  | It takes a moment until task a45334b9-3f5d-41bd-a7fa-7f913468f06b (tempest) has been started and output is visible here.
2026-04-05 01:44:59.110803 | orchestrator |
2026-04-05 01:44:59.110933 | orchestrator | PLAY [Run tempest] *************************************************************
2026-04-05 01:44:59.110951 | orchestrator |
2026-04-05 01:44:59.110994 | orchestrator | TASK [osism.validations.tempest : Create tempest workdir] **********************
2026-04-05 01:44:59.111033 | orchestrator | Sunday 05 April 2026 01:43:37 +0000 (0:00:00.359) 0:00:00.359 **********
2026-04-05 01:44:59.111054 | orchestrator | changed: [testbed-manager]
2026-04-05 01:44:59.111072 | orchestrator |
2026-04-05 01:44:59.111084 | orchestrator | TASK [osism.validations.tempest : Copy tempest wrapper script] *****************
2026-04-05 01:44:59.111095 | orchestrator | Sunday 05 April 2026 01:43:38 +0000 (0:00:01.053) 0:00:01.412 **********
2026-04-05 01:44:59.111107 | orchestrator | changed: [testbed-manager]
2026-04-05 01:44:59.111118 | orchestrator |
2026-04-05 01:44:59.111129 | orchestrator | TASK [osism.validations.tempest : Check for existing tempest initialisation] ***
2026-04-05 01:44:59.111140 | orchestrator | Sunday 05 April 2026 01:43:39 +0000 (0:00:00.450) 0:00:02.720 **********
2026-04-05 01:44:59.111151 | orchestrator | ok: [testbed-manager]
2026-04-05 01:44:59.111163 | orchestrator |
2026-04-05 01:44:59.111174 | orchestrator | TASK [osism.validations.tempest : Init tempest] ********************************
2026-04-05 01:44:59.111186 | orchestrator | Sunday 05 April 2026 01:43:40 +0000 (0:00:00.450) 0:00:03.170 **********
2026-04-05 01:44:59.111196 | orchestrator | changed: [testbed-manager]
2026-04-05 01:44:59.111208 | orchestrator |
2026-04-05 01:44:59.111218 | orchestrator | TASK [osism.validations.tempest : Resolve image IDs] ***************************
2026-04-05 01:44:59.111230 | orchestrator | Sunday 05 April 2026 01:44:03 +0000 (0:00:23.506) 0:00:26.676 **********
2026-04-05 01:44:59.111270 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.3)
2026-04-05 01:44:59.111282 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.2)
2026-04-05 01:44:59.111298 | orchestrator |
2026-04-05 01:44:59.111309 | orchestrator | TASK [osism.validations.tempest : Assert images have been resolved] ************
2026-04-05 01:44:59.111320 | orchestrator | Sunday 05 April 2026 01:44:12 +0000 (0:00:09.044) 0:00:35.721 **********
2026-04-05 01:44:59.111331 | orchestrator | ok: [testbed-manager] => {
2026-04-05 01:44:59.111342 | orchestrator |  "changed": false,
2026-04-05 01:44:59.111353 | orchestrator |  "msg": "All assertions passed"
2026-04-05 01:44:59.111364 | orchestrator | }
2026-04-05 01:44:59.111375 | orchestrator |
2026-04-05 01:44:59.111386 | orchestrator | TASK [osism.validations.tempest : Get auth token] ******************************
2026-04-05 01:44:59.111398 | orchestrator | Sunday 05 April 2026 01:44:12 +0000 (0:00:00.172) 0:00:35.893 **********
2026-04-05 01:44:59.111408 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-05 01:44:59.111433 | orchestrator |
2026-04-05 01:44:59.111445 | orchestrator | TASK [osism.validations.tempest : Get endpoint catalog] ************************
2026-04-05 01:44:59.111456 | orchestrator | Sunday 05 April 2026 01:44:17 +0000 (0:00:04.095) 0:00:39.989 **********
2026-04-05 01:44:59.111467 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-05 01:44:59.111477 | orchestrator |
2026-04-05 01:44:59.111488 | orchestrator | TASK [osism.validations.tempest : Get service catalog] *************************
2026-04-05 01:44:59.111499 | orchestrator | Sunday 05 April 2026 01:44:19 +0000 (0:00:02.138) 0:00:42.128 **********
2026-04-05 01:44:59.111509 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-05 01:44:59.111520 | orchestrator |
2026-04-05 01:44:59.111531 | orchestrator | TASK [osism.validations.tempest : Register img_file name] **********************
2026-04-05 01:44:59.111542 | orchestrator | Sunday 05 April 2026 01:44:23 +0000 (0:00:04.401) 0:00:46.529 **********
2026-04-05 01:44:59.111553 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-05 01:44:59.111564 | orchestrator |
2026-04-05 01:44:59.111574 | orchestrator | TASK [osism.validations.tempest : Download img_file from image_ref] ************
2026-04-05 01:44:59.111585 | orchestrator | Sunday 05 April 2026 01:44:23 +0000 (0:00:00.237) 0:00:46.767 **********
2026-04-05 01:44:59.111596 | orchestrator | changed: [testbed-manager]
2026-04-05 01:44:59.111608 | orchestrator |
2026-04-05 01:44:59.111619 | orchestrator | TASK [osism.validations.tempest : Install qemu-utils package] ******************
2026-04-05 01:44:59.111630 | orchestrator | Sunday 05 April 2026 01:44:26 +0000 (0:00:02.838) 0:00:49.605 **********
2026-04-05 01:44:59.111641 | orchestrator | changed: [testbed-manager]
2026-04-05 01:44:59.111652 | orchestrator |
2026-04-05 01:44:59.111662 | orchestrator | TASK [osism.validations.tempest : Convert img_file to qcow2 format] ************
2026-04-05 01:44:59.111673 | orchestrator | Sunday 05 April 2026 01:44:36 +0000 (0:00:10.251) 0:00:59.857 **********
2026-04-05 01:44:59.111684 | orchestrator | changed: [testbed-manager]
2026-04-05 01:44:59.111695 | orchestrator |
2026-04-05 01:44:59.111706 | orchestrator | TASK [osism.validations.tempest : Get network API extensions] ******************
2026-04-05 01:44:59.111716 | orchestrator | Sunday 05 April 2026 01:44:37 +0000 (0:00:00.821) 0:01:00.679 **********
2026-04-05 01:44:59.111727 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-05 01:44:59.111738 | orchestrator |
2026-04-05 01:44:59.111749 | orchestrator | TASK [osism.validations.tempest : Revoke token] ********************************
2026-04-05 01:44:59.111760 | orchestrator | Sunday 05 April 2026 01:44:39 +0000 (0:00:01.729) 0:01:02.408 **********
2026-04-05 01:44:59.111771 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-05 01:44:59.111782 | orchestrator |
2026-04-05 01:44:59.111793 | orchestrator | TASK [osism.validations.tempest : Set fact for config option api_extensions] ***
2026-04-05 01:44:59.111803 | orchestrator | Sunday 05 April 2026 01:44:41 +0000 (0:00:01.731) 0:01:04.140 **********
2026-04-05 01:44:59.111814 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-05 01:44:59.111825 | orchestrator |
2026-04-05 01:44:59.111836 | orchestrator | TASK [osism.validations.tempest : Set fact for config option img_file] *********
2026-04-05 01:44:59.111857 | orchestrator | Sunday 05 April 2026 01:44:41 +0000 (0:00:00.227) 0:01:04.368 **********
2026-04-05 01:44:59.111868 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-05 01:44:59.111879 | orchestrator |
2026-04-05 01:44:59.111900 | orchestrator | TASK [osism.validations.tempest : Resolve floating network ID] *****************
2026-04-05 01:44:59.111911 | orchestrator | Sunday 05 April 2026 01:44:41 +0000 (0:00:00.419) 0:01:04.788 **********
2026-04-05 01:44:59.111922 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-05 01:44:59.111933 | orchestrator |
2026-04-05 01:44:59.111944 | orchestrator | TASK [osism.validations.tempest : Assert floating network id has been resolved] ***
2026-04-05 01:44:59.112026 | orchestrator | Sunday 05 April 2026 01:44:46 +0000 (0:00:04.314) 0:01:09.103 **********
2026-04-05 01:44:59.112042 | orchestrator | ok: [testbed-manager -> localhost] => {
2026-04-05 01:44:59.112053 | orchestrator |  "changed": false,
2026-04-05 01:44:59.112063 | orchestrator |  "msg": "All assertions passed"
2026-04-05 01:44:59.112074 | orchestrator | }
2026-04-05 01:44:59.112085 | orchestrator |
2026-04-05 01:44:59.112097 | orchestrator | TASK [osism.validations.tempest : Resolve flavor IDs] **************************
2026-04-05 01:44:59.112108 | orchestrator | Sunday 05 April 2026 01:44:46 +0000 (0:00:00.220) 0:01:09.323 **********
2026-04-05 01:44:59.112119 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})
2026-04-05 01:44:59.112131 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})
2026-04-05 01:44:59.112142 | orchestrator | skipping: [testbed-manager]
2026-04-05 01:44:59.112153 | orchestrator |
2026-04-05 01:44:59.112164 | orchestrator | TASK [osism.validations.tempest : Assert flavors have been resolved] ***********
2026-04-05 01:44:59.112175 | orchestrator | Sunday 05 April 2026 01:44:46 +0000 (0:00:00.210) 0:01:09.533 **********
2026-04-05 01:44:59.112186 | orchestrator | skipping: [testbed-manager]
2026-04-05 01:44:59.112197 | orchestrator |
2026-04-05 01:44:59.112208 | orchestrator | TASK [osism.validations.tempest : Get stats of exclude list] *******************
2026-04-05 01:44:59.112218 | orchestrator | Sunday 05 April 2026 01:44:46 +0000 (0:00:00.161) 0:01:09.695 **********
2026-04-05 01:44:59.112229 | orchestrator | ok: [testbed-manager]
2026-04-05 01:44:59.112240 | orchestrator |
2026-04-05 01:44:59.112250 | orchestrator | TASK [osism.validations.tempest : Copy exclude list] ***************************
2026-04-05 01:44:59.112261 | orchestrator | Sunday 05 April 2026 01:44:47 +0000 (0:00:00.525) 0:01:10.220 **********
2026-04-05 01:44:59.112272 | orchestrator | changed: [testbed-manager]
2026-04-05 01:44:59.112283 | orchestrator |
2026-04-05 01:44:59.112294 | orchestrator | TASK [osism.validations.tempest : Get stats of include list] *******************
2026-04-05 01:44:59.112305 | orchestrator | Sunday 05 April 2026 01:44:48 +0000 (0:00:00.946) 0:01:11.166 **********
2026-04-05 01:44:59.112315 | orchestrator | ok: [testbed-manager]
2026-04-05 01:44:59.112326 | orchestrator |
2026-04-05 01:44:59.112337 | orchestrator | TASK [osism.validations.tempest : Copy include list] ***************************
2026-04-05 01:44:59.112347 | orchestrator | Sunday 05 April 2026 01:44:48 +0000 (0:00:00.482) 0:01:11.648 **********
2026-04-05 01:44:59.112358 | orchestrator | skipping: [testbed-manager]
2026-04-05 01:44:59.112368 | orchestrator |
2026-04-05 01:44:59.112379 | orchestrator | TASK [osism.validations.tempest : Create tempest flavors] **********************
2026-04-05 01:44:59.112390 | orchestrator | Sunday 05 April 2026 01:44:49 +0000 (0:00:00.338) 0:01:11.987 **********
2026-04-05 01:44:59.112400 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})
2026-04-05 01:44:59.112412 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})
2026-04-05 01:44:59.112422 | orchestrator |
2026-04-05 01:44:59.112434 | orchestrator | TASK [osism.validations.tempest : Copy tempest.conf file] **********************
2026-04-05 01:44:59.112444 | orchestrator | Sunday 05 April 2026 01:44:57 +0000 (0:00:08.899) 0:01:20.886 **********
2026-04-05 01:44:59.112455 | orchestrator | changed: [testbed-manager]
2026-04-05 01:44:59.112474 | orchestrator |
2026-04-05 01:44:59.112485 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 01:44:59.112497 | orchestrator | testbed-manager : ok=24  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-05 01:44:59.112508 | orchestrator |
2026-04-05 01:44:59.112519 | orchestrator |
2026-04-05 01:44:59.112530 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 01:44:59.112541 | orchestrator | Sunday 05 April 2026 01:44:59 +0000 (0:00:01.156) 0:01:22.043 **********
2026-04-05 01:44:59.112552 | orchestrator | ===============================================================================
2026-04-05 01:44:59.112563 | orchestrator | osism.validations.tempest : Init tempest ------------------------------- 23.51s
2026-04-05 01:44:59.112573 | orchestrator | osism.validations.tempest : Install qemu-utils package ----------------- 10.25s
2026-04-05 01:44:59.112584 | orchestrator | osism.validations.tempest : Resolve image IDs --------------------------- 9.04s
2026-04-05 01:44:59.112595 | orchestrator | osism.validations.tempest : Create tempest flavors ---------------------- 8.90s
2026-04-05 01:44:59.112612 | orchestrator | osism.validations.tempest : Get service catalog ------------------------- 4.40s
2026-04-05 01:44:59.112624 | orchestrator | osism.validations.tempest : Resolve floating network ID ----------------- 4.31s
2026-04-05 01:44:59.112635 | orchestrator | osism.validations.tempest : Get auth token ------------------------------ 4.10s
2026-04-05 01:44:59.112646 | orchestrator | osism.validations.tempest : Download img_file from image_ref ------------ 2.84s
2026-04-05 01:44:59.112656 | orchestrator | osism.validations.tempest : Get endpoint catalog ------------------------ 2.14s
2026-04-05 01:44:59.112667 | orchestrator | osism.validations.tempest : Revoke token -------------------------------- 1.73s
2026-04-05 01:44:59.112687 | orchestrator | osism.validations.tempest : Get network API extensions ------------------ 1.73s
2026-04-05 01:44:59.112706 | orchestrator | osism.validations.tempest : Copy tempest wrapper script ----------------- 1.31s
2026-04-05 01:44:59.112725 | orchestrator | osism.validations.tempest : Copy tempest.conf file ---------------------- 1.16s
2026-04-05 01:44:59.112743 | orchestrator | osism.validations.tempest : Create tempest workdir ---------------------- 1.05s
2026-04-05 01:44:59.112762 | orchestrator | osism.validations.tempest : Copy exclude list --------------------------- 0.95s
2026-04-05 01:44:59.112779 | orchestrator | osism.validations.tempest : Convert img_file to qcow2 format ------------ 0.82s
2026-04-05 01:44:59.112797 | orchestrator | osism.validations.tempest : Get stats of exclude list ------------------- 0.53s
2026-04-05 01:44:59.112830 | orchestrator | osism.validations.tempest : Get stats of include list ------------------- 0.48s
2026-04-05 01:44:59.413135 | orchestrator | osism.validations.tempest : Check for existing tempest initialisation --- 0.45s
2026-04-05 01:44:59.413268 | orchestrator | osism.validations.tempest : Set fact for config option img_file --------- 0.42s
2026-04-05 01:44:59.647108 | orchestrator | + sed -i '/log_dir =/d' /opt/tempest/etc/tempest.conf
2026-04-05 01:44:59.650834 | orchestrator | + sed -i '/log_file =/d' /opt/tempest/etc/tempest.conf
2026-04-05 01:44:59.656292 | orchestrator | + [[ false == \t\r\u\e ]]
2026-04-05 01:44:59.656390 | orchestrator |
2026-04-05 01:44:59.656400 | orchestrator | ## IDENTITY (API)
2026-04-05 01:44:59.656406 | orchestrator |
2026-04-05 01:44:59.656412 | orchestrator | + echo
2026-04-05 01:44:59.656419 | orchestrator | + echo '## IDENTITY (API)'
2026-04-05 01:44:59.656426 | orchestrator | + echo
2026-04-05 01:44:59.656433 | orchestrator | + _tempest tempest.api.identity.v3
2026-04-05 01:44:59.656443 | orchestrator | + local regex=tempest.api.identity.v3
2026-04-05 01:44:59.658160 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16
2026-04-05 01:44:59.659447 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-05 01:44:59.661008 | orchestrator | + tee -a /opt/tempest/20260405-0144.log
2026-04-05 01:45:03.931202 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-05 01:45:03.931355 | orchestrator | Did you mean one of these?
2026-04-05 01:45:03.931375 | orchestrator | help
2026-04-05 01:45:03.931387 | orchestrator | init
2026-04-05 01:45:04.436865 | orchestrator |
2026-04-05 01:45:04.437037 | orchestrator | ## IMAGE (API)
2026-04-05 01:45:04.437052 | orchestrator |
2026-04-05 01:45:04.437059 | orchestrator | + echo
2026-04-05 01:45:04.437066 | orchestrator | + echo '## IMAGE (API)'
2026-04-05 01:45:04.437074 | orchestrator | + echo
2026-04-05 01:45:04.437081 | orchestrator | + _tempest tempest.api.image.v2
2026-04-05 01:45:04.437088 | orchestrator | + local regex=tempest.api.image.v2
2026-04-05 01:45:04.437111 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16
2026-04-05 01:45:04.437748 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-05 01:45:04.440152 | orchestrator | + tee -a /opt/tempest/20260405-0145.log
2026-04-05 01:45:08.594692 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-05 01:45:08.594809 | orchestrator | Did you mean one of these?
2026-04-05 01:45:08.594840 | orchestrator | help
2026-04-05 01:45:08.594864 | orchestrator | init
2026-04-05 01:45:08.936039 | orchestrator |
2026-04-05 01:45:08.936148 | orchestrator | ## NETWORK (API)
2026-04-05 01:45:08.936177 | orchestrator |
2026-04-05 01:45:08.936199 | orchestrator | + echo
2026-04-05 01:45:08.936218 | orchestrator | + echo '## NETWORK (API)'
2026-04-05 01:45:08.936238 | orchestrator | + echo
2026-04-05 01:45:08.936258 | orchestrator | + _tempest tempest.api.network
2026-04-05 01:45:08.936279 | orchestrator | + local regex=tempest.api.network
2026-04-05 01:45:08.936302 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16
2026-04-05 01:45:08.936340 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-05 01:45:08.939308 | orchestrator | + tee -a /opt/tempest/20260405-0145.log
2026-04-05 01:45:12.679708 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-05 01:45:12.679825 | orchestrator | Did you mean one of these?
2026-04-05 01:45:12.679845 | orchestrator | help
2026-04-05 01:45:12.679857 | orchestrator | init
2026-04-05 01:45:13.191289 | orchestrator |
2026-04-05 01:45:13.191372 | orchestrator | ## VOLUME (API)
2026-04-05 01:45:13.191383 | orchestrator |
2026-04-05 01:45:13.191391 | orchestrator | + echo
2026-04-05 01:45:13.191398 | orchestrator | + echo '## VOLUME (API)'
2026-04-05 01:45:13.191406 | orchestrator | + echo
2026-04-05 01:45:13.191413 | orchestrator | + _tempest tempest.api.volume
2026-04-05 01:45:13.191420 | orchestrator | + local regex=tempest.api.volume
2026-04-05 01:45:13.191941 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16
2026-04-05 01:45:13.193148 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-05 01:45:13.196744 | orchestrator | + tee -a /opt/tempest/20260405-0145.log
2026-04-05 01:45:17.412397 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-05 01:45:17.412500 | orchestrator | Did you mean one of these?
2026-04-05 01:45:17.412514 | orchestrator | help
2026-04-05 01:45:17.412525 | orchestrator | init
2026-04-05 01:45:17.837363 | orchestrator |
2026-04-05 01:45:17.837492 | orchestrator | ## COMPUTE (API)
2026-04-05 01:45:17.837513 | orchestrator |
2026-04-05 01:45:17.837524 | orchestrator | + echo
2026-04-05 01:45:17.837534 | orchestrator | + echo '## COMPUTE (API)'
2026-04-05 01:45:17.837546 | orchestrator | + echo
2026-04-05 01:45:17.837556 | orchestrator | + _tempest tempest.api.compute
2026-04-05 01:45:17.837591 | orchestrator | + local regex=tempest.api.compute
2026-04-05 01:45:17.837617 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16
2026-04-05 01:45:17.838759 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-05 01:45:17.841130 | orchestrator | + tee -a /opt/tempest/20260405-0145.log
2026-04-05 01:45:22.156095 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-05 01:45:22.156183 | orchestrator | Did you mean one of these?
2026-04-05 01:45:22.156197 | orchestrator | help
2026-04-05 01:45:22.156207 | orchestrator | init
2026-04-05 01:45:22.672360 | orchestrator |
2026-04-05 01:45:22.673229 | orchestrator | ## DNS (API)
2026-04-05 01:45:22.673274 | orchestrator |
2026-04-05 01:45:22.673291 | orchestrator | + echo
2026-04-05 01:45:22.673305 | orchestrator | + echo '## DNS (API)'
2026-04-05 01:45:22.673321 | orchestrator | + echo
2026-04-05 01:45:22.673335 | orchestrator | + _tempest designate_tempest_plugin.tests.api.v2
2026-04-05 01:45:22.673351 | orchestrator | + local regex=designate_tempest_plugin.tests.api.v2
2026-04-05 01:45:22.674365 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16
2026-04-05 01:45:22.674944 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-05 01:45:22.677866 | orchestrator | + tee -a /opt/tempest/20260405-0145.log
2026-04-05 01:45:26.725638 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-05 01:45:26.725738 | orchestrator | Did you mean one of these?
2026-04-05 01:45:26.725750 | orchestrator | help
2026-04-05 01:45:26.725756 | orchestrator | init
2026-04-05 01:45:27.304424 | orchestrator |
2026-04-05 01:45:27.304502 | orchestrator | ## OBJECT-STORE (API)
2026-04-05 01:45:27.304515 | orchestrator |
2026-04-05 01:45:27.304524 | orchestrator | + echo
2026-04-05 01:45:27.304532 | orchestrator | + echo '## OBJECT-STORE (API)'
2026-04-05 01:45:27.304541 | orchestrator | + echo
2026-04-05 01:45:27.304550 | orchestrator | + _tempest tempest.api.object_storage
2026-04-05 01:45:27.304560 | orchestrator | + local regex=tempest.api.object_storage
2026-04-05 01:45:27.304624 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16
2026-04-05 01:45:27.306195 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-05 01:45:27.309233 | orchestrator | + tee -a /opt/tempest/20260405-0145.log
2026-04-05 01:45:31.252614 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-05 01:45:31.252724 | orchestrator | Did you mean one of these?
2026-04-05 01:45:31.252736 | orchestrator | help
2026-04-05 01:45:31.252742 | orchestrator | init
2026-04-05 01:45:31.750220 | orchestrator | ok: Runtime: 0:02:09.799881
2026-04-05 01:45:31.764236 |
2026-04-05 01:45:31.764359 | TASK [Check prometheus alert status]
2026-04-05 01:45:32.298080 | orchestrator | skipping: Conditional result was False
2026-04-05 01:45:32.301203 |
2026-04-05 01:45:32.301374 | PLAY RECAP
2026-04-05 01:45:32.301508 | orchestrator | ok: 25 changed: 12 unreachable: 0 failed: 0 skipped: 4 rescued: 0 ignored: 0
2026-04-05 01:45:32.301577 |
2026-04-05 01:45:32.560947 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-04-05 01:45:32.563418 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-05 01:45:33.348994 |
2026-04-05 01:45:33.349177 | PLAY [Post output play]
2026-04-05 01:45:33.365854 |
2026-04-05 01:45:33.366064 | LOOP [stage-output : Register sources]
2026-04-05 01:45:33.437676 |
2026-04-05 01:45:33.438116 | TASK [stage-output : Check sudo]
2026-04-05 01:45:34.333188 | orchestrator | sudo: a password is required
2026-04-05 01:45:34.476302 | orchestrator | ok: Runtime: 0:00:00.009349
2026-04-05 01:45:34.490452 |
2026-04-05 01:45:34.490619 | LOOP [stage-output : Set source and destination for files and folders]
2026-04-05 01:45:34.524955 |
2026-04-05 01:45:34.525175 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-04-05 01:45:34.592601 | orchestrator | ok
2026-04-05 01:45:34.602445 |
2026-04-05 01:45:34.602587 | LOOP [stage-output : Ensure target folders exist]
2026-04-05 01:45:35.080742 | orchestrator | ok: "docs"
2026-04-05 01:45:35.081141 |
2026-04-05 01:45:35.372416 | orchestrator | ok: "artifacts"
2026-04-05 01:45:35.659147 | orchestrator | ok: "logs"
2026-04-05 01:45:35.673136 |
2026-04-05 01:45:35.673281 | LOOP [stage-output : Copy files and folders to staging folder]
2026-04-05 01:45:35.704662 |
2026-04-05 01:45:35.704878 | TASK [stage-output : Make all log files readable]
2026-04-05 01:45:36.004367 | orchestrator | ok
2026-04-05 01:45:36.012351 |
2026-04-05 01:45:36.012543 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-04-05 01:45:36.047359 | orchestrator | skipping: Conditional result was False
2026-04-05 01:45:36.063617 |
2026-04-05 01:45:36.063768 | TASK [stage-output : Discover log files for compression]
2026-04-05 01:45:36.088713 | orchestrator | skipping: Conditional result was False
2026-04-05 01:45:36.098992 |
2026-04-05 01:45:36.099124 | LOOP [stage-output : Archive everything from logs]
2026-04-05 01:45:36.145865 |
2026-04-05 01:45:36.146088 | PLAY [Post cleanup play]
2026-04-05 01:45:36.156331 |
2026-04-05 01:45:36.156469 | TASK [Set cloud fact (Zuul deployment)]
2026-04-05 01:45:36.216771 | orchestrator | ok
2026-04-05 01:45:36.228425 |
2026-04-05 01:45:36.228543 | TASK [Set cloud fact (local deployment)]
2026-04-05 01:45:36.252883 | orchestrator | skipping: Conditional result was False
2026-04-05 01:45:36.267796 |
2026-04-05 01:45:36.267931 | TASK [Clean the cloud environment]
2026-04-05 01:45:37.023895 | orchestrator | 2026-04-05 01:45:37 - clean up servers
2026-04-05 01:45:37.882825 | orchestrator | 2026-04-05 01:45:37 - testbed-manager
2026-04-05 01:45:37.976626 | orchestrator | 2026-04-05 01:45:37 - testbed-node-2
2026-04-05 01:45:38.070048 | orchestrator | 2026-04-05 01:45:38 - testbed-node-5
2026-04-05 01:45:38.169471 | orchestrator | 2026-04-05 01:45:38 - testbed-node-1
2026-04-05 01:45:38.265659 | orchestrator | 2026-04-05 01:45:38 - testbed-node-3
2026-04-05 01:45:38.370935 | orchestrator | 2026-04-05 01:45:38 - testbed-node-4
2026-04-05 01:45:38.462811 | orchestrator | 2026-04-05 01:45:38 - testbed-node-0
2026-04-05 01:45:38.595712 | orchestrator | 2026-04-05 01:45:38 - clean up keypairs
2026-04-05 01:45:38.617294 | orchestrator | 2026-04-05 01:45:38 - testbed
2026-04-05 01:45:38.644599 | orchestrator | 2026-04-05 01:45:38 - wait for servers to be gone
2026-04-05 01:45:49.586882 | orchestrator | 2026-04-05 01:45:49 - clean up ports
2026-04-05 01:45:49.788098 | orchestrator | 2026-04-05 01:45:49 - 05ef1163-77a9-48ee-896a-fadbef341a47
2026-04-05 01:45:50.084803 | orchestrator | 2026-04-05 01:45:50 - 1698d8ae-07f8-48cb-9258-8bc4769d9719
2026-04-05 01:45:50.605502 | orchestrator | 2026-04-05 01:45:50 - 4fdf1ad3-6a80-4cef-9986-bb334a18fd51
2026-04-05 01:45:50.820409 | orchestrator | 2026-04-05 01:45:50 - 64d0a3e6-a662-4d73-85fb-f4e535ba42a8
2026-04-05 01:45:51.032986 | orchestrator | 2026-04-05 01:45:51 - 9154e648-6d79-437b-80a9-3a0b72247bbe
2026-04-05 01:45:51.242862 | orchestrator | 2026-04-05 01:45:51 - a933e9e4-6671-4727-a750-f089ba19a94d
2026-04-05 01:45:51.467770 | orchestrator | 2026-04-05 01:45:51 - dd4fe4fb-99aa-44da-abfa-788824866f46
2026-04-05 01:45:51.678683 | orchestrator | 2026-04-05 01:45:51 - clean up volumes
2026-04-05 01:45:51.831058 | orchestrator | 2026-04-05 01:45:51 - testbed-volume-3-node-base
2026-04-05 01:45:51.883057 | orchestrator | 2026-04-05 01:45:51 - testbed-volume-5-node-base
2026-04-05 01:45:51.932345 | orchestrator | 2026-04-05 01:45:51 - testbed-volume-0-node-base
2026-04-05 01:45:51.975165 | orchestrator | 2026-04-05 01:45:51 - testbed-volume-1-node-base
2026-04-05 01:45:52.018338 | orchestrator | 2026-04-05 01:45:52 - testbed-volume-4-node-base
2026-04-05 01:45:52.060368 | orchestrator | 2026-04-05 01:45:52 - testbed-volume-manager-base
2026-04-05 01:45:52.108008 | orchestrator | 2026-04-05 01:45:52 - testbed-volume-2-node-base
2026-04-05 01:45:52.155421 | orchestrator | 2026-04-05 01:45:52 - testbed-volume-8-node-5
2026-04-05 01:45:52.199293 | orchestrator | 2026-04-05 01:45:52 - testbed-volume-1-node-4
2026-04-05 01:45:52.244116 | orchestrator | 2026-04-05 01:45:52 - testbed-volume-0-node-3
2026-04-05 01:45:52.285779 | orchestrator | 2026-04-05 01:45:52 - testbed-volume-5-node-5
2026-04-05 01:45:52.329984 | orchestrator | 2026-04-05 01:45:52 - testbed-volume-6-node-3
2026-04-05 01:45:52.379020 | orchestrator | 2026-04-05 01:45:52 - testbed-volume-3-node-3
2026-04-05 01:45:52.423895 | orchestrator | 2026-04-05 01:45:52 - testbed-volume-7-node-4
2026-04-05 01:45:52.464355 | orchestrator | 2026-04-05 01:45:52 - testbed-volume-4-node-4
2026-04-05 01:45:52.507392 | orchestrator | 2026-04-05 01:45:52 - testbed-volume-2-node-5
2026-04-05 01:45:52.545913 | orchestrator | 2026-04-05 01:45:52 - disconnect routers
2026-04-05 01:45:52.621420 | orchestrator | 2026-04-05 01:45:52 - testbed
2026-04-05 01:45:53.690127 | orchestrator | 2026-04-05 01:45:53 - clean up subnets
2026-04-05 01:45:53.735015 | orchestrator | 2026-04-05 01:45:53 - subnet-testbed-management
2026-04-05 01:45:53.952547 | orchestrator | 2026-04-05 01:45:53 - clean up networks
2026-04-05 01:45:54.105623 | orchestrator | 2026-04-05 01:45:54 - net-testbed-management
2026-04-05 01:45:54.409382 | orchestrator | 2026-04-05 01:45:54 - clean up security groups
2026-04-05 01:45:54.459717 | orchestrator | 2026-04-05 01:45:54 - testbed-management
2026-04-05 01:45:54.580174 | orchestrator | 2026-04-05 01:45:54 - testbed-node
2026-04-05 01:45:54.718509 | orchestrator | 2026-04-05 01:45:54 - clean up floating ips
2026-04-05 01:45:54.752168 | orchestrator | 2026-04-05 01:45:54 - 81.163.193.182
2026-04-05 01:45:55.128153 | orchestrator | 2026-04-05 01:45:55 - clean up routers
2026-04-05 01:45:55.254232 | orchestrator | 2026-04-05 01:45:55 - testbed
2026-04-05 01:45:56.820917 | orchestrator | ok: Runtime: 0:00:20.127892
2026-04-05 01:45:56.825054 |
2026-04-05 01:45:56.825212 | PLAY RECAP
2026-04-05 01:45:56.825336 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-04-05 01:45:56.825398 |
2026-04-05 01:45:56.973494 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-05 01:45:56.976131 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-04-05 01:45:57.735009 |
2026-04-05 01:45:57.735174 | PLAY [Cleanup play]
2026-04-05 01:45:57.751466 |
2026-04-05 01:45:57.751614 | TASK [Set cloud fact (Zuul deployment)]
2026-04-05 01:45:57.808570 | orchestrator | ok
2026-04-05 01:45:57.817405 |
2026-04-05 01:45:57.818491 | TASK [Set cloud fact (local deployment)]
2026-04-05 01:45:57.853038 | orchestrator | skipping: Conditional result was False
2026-04-05 01:45:57.868024 |
2026-04-05 01:45:57.868168 | TASK [Clean the cloud environment]
2026-04-05 01:45:58.968215 | orchestrator | 2026-04-05 01:45:58 - clean up servers
2026-04-05 01:45:59.544824 | orchestrator | 2026-04-05 01:45:59 - clean up keypairs
2026-04-05 01:45:59.562679 | orchestrator | 2026-04-05 01:45:59 - wait for servers to be gone
2026-04-05 01:45:59.608182 | orchestrator | 2026-04-05 01:45:59 - clean up ports
2026-04-05 01:45:59.691844 | orchestrator | 2026-04-05 01:45:59 - clean up volumes
2026-04-05 01:45:59.763273 | orchestrator | 2026-04-05 01:45:59 - disconnect routers
2026-04-05 01:45:59.793775 | orchestrator | 2026-04-05 01:45:59 - clean up subnets
2026-04-05 01:45:59.813364 | orchestrator | 2026-04-05 01:45:59 - clean up networks
2026-04-05 01:46:00.539179 | orchestrator | 2026-04-05 01:46:00 - clean up security groups
2026-04-05 01:46:00.580225 | orchestrator | 2026-04-05 01:46:00 - clean up floating ips
2026-04-05 01:46:00.605152 | orchestrator | 2026-04-05 01:46:00 - clean up routers
2026-04-05 01:46:00.915495 | orchestrator | ok: Runtime: 0:00:01.997651
2026-04-05 01:46:00.919017 |
2026-04-05 01:46:00.919191 | PLAY RECAP
2026-04-05 01:46:00.919308 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-04-05 01:46:00.919370 |
2026-04-05 01:46:01.059227 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-04-05 01:46:01.060325 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-04-05 01:46:01.830115 |
2026-04-05 01:46:01.830292 | PLAY [Base post-fetch]
2026-04-05 01:46:01.846674 |
2026-04-05 01:46:01.846821 | TASK [fetch-output : Set log path for multiple nodes]
2026-04-05 01:46:01.913925 | orchestrator | skipping: Conditional result was False
2026-04-05 01:46:01.929862 |
2026-04-05 01:46:01.930137 | TASK [fetch-output : Set log path for single node]
2026-04-05 01:46:01.979576 | orchestrator | ok
2026-04-05 01:46:01.989393 |
2026-04-05 01:46:01.989532 | LOOP [fetch-output : Ensure local output dirs]
2026-04-05 01:46:02.493103 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/4eaff187072e4b038e3270d3005de3d9/work/logs"
2026-04-05 01:46:02.753507 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/4eaff187072e4b038e3270d3005de3d9/work/artifacts"
2026-04-05 01:46:03.037902 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/4eaff187072e4b038e3270d3005de3d9/work/docs"
2026-04-05 01:46:03.060200 |
2026-04-05 01:46:03.060360 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-04-05 01:46:03.984147 | orchestrator | changed: .d..t...... ./
2026-04-05 01:46:03.984398 | orchestrator | changed: All items complete
2026-04-05 01:46:03.984457 |
2026-04-05 01:46:04.668809 | orchestrator | changed: .d..t...... ./
2026-04-05 01:46:05.393406 | orchestrator | changed: .d..t...... ./
2026-04-05 01:46:05.415311 |
2026-04-05 01:46:05.415442 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-04-05 01:46:05.455399 | orchestrator | skipping: Conditional result was False
2026-04-05 01:46:05.458363 | orchestrator | skipping: Conditional result was False
2026-04-05 01:46:05.470309 |
2026-04-05 01:46:05.470415 | PLAY RECAP
2026-04-05 01:46:05.470482 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-04-05 01:46:05.470517 |
2026-04-05 01:46:05.608167 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-04-05 01:46:05.610933 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-04-05 01:46:06.355131 |
2026-04-05 01:46:06.355299 | PLAY [Base post]
2026-04-05 01:46:06.370457 |
2026-04-05 01:46:06.370599 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-04-05 01:46:07.387754 | orchestrator | changed
2026-04-05 01:46:07.398273 |
2026-04-05 01:46:07.398397 | PLAY RECAP
2026-04-05 01:46:07.398473 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-04-05 01:46:07.398548 |
2026-04-05 01:46:07.521416 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-04-05 01:46:07.522935 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-04-05 01:46:08.325453 |
2026-04-05 01:46:08.325647 | PLAY [Base post-logs]
2026-04-05 01:46:08.336738 |
2026-04-05 01:46:08.336878 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-04-05 01:46:08.789482 | localhost | changed
2026-04-05 01:46:08.799800 |
2026-04-05 01:46:08.799998 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-04-05 01:46:08.837159 | localhost | ok
2026-04-05 01:46:08.841549 |
2026-04-05 01:46:08.841688 | TASK [Set zuul-log-path fact]
2026-04-05 01:46:08.860807 | localhost | ok
2026-04-05 01:46:08.873186 |
2026-04-05 01:46:08.873337 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-05 01:46:08.910905 | localhost | ok
2026-04-05 01:46:08.917392 |
2026-04-05 01:46:08.917576 | TASK [upload-logs : Create log directories]
2026-04-05 01:46:09.438087 | localhost | changed
2026-04-05 01:46:09.443244 |
2026-04-05 01:46:09.443560 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-04-05 01:46:09.947503 | localhost -> localhost | ok: Runtime: 0:00:00.007003
2026-04-05 01:46:09.955170 |
2026-04-05 01:46:09.955369 | TASK [upload-logs : Upload logs to log server]
2026-04-05 01:46:10.529347 | localhost | Output suppressed because no_log was given
2026-04-05 01:46:10.531673 |
2026-04-05 01:46:10.531807 | LOOP [upload-logs : Compress console log and json output]
2026-04-05 01:46:10.586194 | localhost | skipping: Conditional result was False
2026-04-05 01:46:10.591133 | localhost | skipping: Conditional result was False
2026-04-05 01:46:10.603823 |
2026-04-05 01:46:10.603937 | LOOP [upload-logs : Upload compressed console log and json output]
2026-04-05 01:46:10.664378 | localhost | skipping: Conditional result was False
2026-04-05 01:46:10.664824 |
2026-04-05 01:46:10.669165 | localhost | skipping: Conditional result was False
2026-04-05 01:46:10.684377 |
2026-04-05 01:46:10.684728 | LOOP [upload-logs : Upload console log and json output]
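Editor's note: the two "Clean the cloud environment" tasks above log the same fixed teardown order (servers first, routers last), which respects OpenStack resource dependencies: ports, volumes, and keypairs can only be freed once servers are gone, and subnets/networks only after routers are disconnected. The sketch below is NOT the osism/testbed implementation; it is a minimal, hypothetical Python illustration of that ordering, with the real cloud API calls (e.g. via openstacksdk) replaced by injectable stub cleaners.

```python
# Hypothetical sketch of the dependency-ordered teardown seen in the log above.
# Real per-resource deletion (openstacksdk calls, wait loops) is abstracted
# behind callables so the ordering logic itself is testable offline.
from typing import Callable, Dict, List, Tuple

# Order transcribed from the "Clean the cloud environment" task output.
CLEANUP_ORDER: List[str] = [
    "servers",            # delete compute instances first
    "keypairs",
    # "wait for servers to be gone" happens here in the log
    "ports",              # freed only after their servers are deleted
    "volumes",            # detached once servers are gone
    "router interfaces",  # "disconnect routers" in the log
    "subnets",
    "networks",
    "security groups",
    "floating ips",
    "routers",            # routers go last, after being disconnected
]

def run_cleanup(cleaners: Dict[str, Callable[[], int]]) -> List[Tuple[str, int]]:
    """Invoke each cleaner in dependency order.

    `cleaners` maps a resource kind to a callable returning how many
    resources it deleted; kinds without a cleaner are treated as empty.
    Returns (resource_kind, deleted_count) pairs in execution order.
    """
    results: List[Tuple[str, int]] = []
    for resource in CLEANUP_ORDER:
        clean = cleaners.get(resource, lambda: 0)
        results.append((resource, clean()))
    return results
```

In the log, the first cleanup pass (post.yml) deletes seven servers, seven ports, and seventeen volumes in roughly twenty seconds, while the second pass (cleanup.yml) finds nothing left and completes in about two seconds; an order-driven loop like the one above is naturally idempotent in that way.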