2026-03-16 00:00:06.833806 | Job console starting
2026-03-16 00:00:06.867634 | Updating git repos
2026-03-16 00:00:06.976989 | Cloning repos into workspace
2026-03-16 00:00:07.254973 | Restoring repo states
2026-03-16 00:00:07.292711 | Merging changes
2026-03-16 00:00:07.292734 | Checking out repos
2026-03-16 00:00:07.643058 | Preparing playbooks
2026-03-16 00:00:08.535070 | Running Ansible setup
2026-03-16 00:00:15.532329 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-03-16 00:00:16.472286 |
2026-03-16 00:00:16.473498 | PLAY [Base pre]
2026-03-16 00:00:16.503761 |
2026-03-16 00:00:16.503889 | TASK [Setup log path fact]
2026-03-16 00:00:16.544926 | orchestrator | ok
2026-03-16 00:00:16.585869 |
2026-03-16 00:00:16.586012 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-16 00:00:16.629474 | orchestrator | ok
2026-03-16 00:00:16.659179 |
2026-03-16 00:00:16.659299 | TASK [emit-job-header : Print job information]
2026-03-16 00:00:16.749088 | # Job Information
2026-03-16 00:00:16.749284 | Ansible Version: 2.16.14
2026-03-16 00:00:16.749322 | Job: testbed-deploy-stable-in-a-nutshell-with-tempest-ubuntu-24.04
2026-03-16 00:00:16.749369 | Pipeline: periodic-midnight
2026-03-16 00:00:16.749392 | Executor: 521e9411259a
2026-03-16 00:00:16.749414 | Triggered by: https://github.com/osism/testbed
2026-03-16 00:00:16.749436 | Event ID: 83879fe3b40a4aa88f7386c4fb052b3c
2026-03-16 00:00:16.778679 |
2026-03-16 00:00:16.778803 | LOOP [emit-job-header : Print node information]
2026-03-16 00:00:17.229300 | orchestrator | ok:
2026-03-16 00:00:17.229671 | orchestrator | # Node Information
2026-03-16 00:00:17.229724 | orchestrator | Inventory Hostname: orchestrator
2026-03-16 00:00:17.229752 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-03-16 00:00:17.229776 | orchestrator | Username: zuul-testbed06
2026-03-16 00:00:17.229834 | orchestrator | Distro: Debian 12.13
2026-03-16 00:00:17.229862 | orchestrator | Provider: static-testbed
2026-03-16 00:00:17.229883 | orchestrator | Region:
2026-03-16 00:00:17.229904 | orchestrator | Label: testbed-orchestrator
2026-03-16 00:00:17.229924 | orchestrator | Product Name: OpenStack Nova
2026-03-16 00:00:17.229943 | orchestrator | Interface IP: 81.163.193.140
2026-03-16 00:00:17.262428 |
2026-03-16 00:00:17.262545 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-03-16 00:00:18.595032 | orchestrator -> localhost | changed
2026-03-16 00:00:18.607513 |
2026-03-16 00:00:18.607645 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-03-16 00:00:21.620388 | orchestrator -> localhost | changed
2026-03-16 00:00:21.634354 |
2026-03-16 00:00:21.634456 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-03-16 00:00:22.937698 | orchestrator -> localhost | ok
2026-03-16 00:00:22.944184 |
2026-03-16 00:00:22.944323 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-03-16 00:00:22.979902 | orchestrator | ok
2026-03-16 00:00:23.046224 | orchestrator | included: /var/lib/zuul/builds/8c2e2d71d28f4d479ff9ce8d3bae7f94/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-03-16 00:00:23.086032 |
2026-03-16 00:00:23.086130 | TASK [add-build-sshkey : Create Temp SSH key]
2026-03-16 00:00:26.467712 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-03-16 00:00:26.467882 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/8c2e2d71d28f4d479ff9ce8d3bae7f94/work/8c2e2d71d28f4d479ff9ce8d3bae7f94_id_rsa
2026-03-16 00:00:26.467915 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/8c2e2d71d28f4d479ff9ce8d3bae7f94/work/8c2e2d71d28f4d479ff9ce8d3bae7f94_id_rsa.pub
2026-03-16 00:00:26.467937 | orchestrator -> localhost | The key fingerprint is:
2026-03-16 00:00:26.467959 | orchestrator -> localhost | SHA256:lwfayEzaWgQP6VrSgDOdGL+KKskvBLcKpmSklbAyOEE zuul-build-sshkey
2026-03-16 00:00:26.467978 | orchestrator -> localhost | The key's randomart image is:
2026-03-16 00:00:26.468007 | orchestrator -> localhost | +---[RSA 3072]----+
2026-03-16 00:00:26.468026 | orchestrator -> localhost | |.E.= .o. |
2026-03-16 00:00:26.468044 | orchestrator -> localhost | |o =.+ .+ |
2026-03-16 00:00:26.468061 | orchestrator -> localhost | |.+ +.+ + . |
2026-03-16 00:00:26.468078 | orchestrator -> localhost | |B.+ ..+B + o |
2026-03-16 00:00:26.468094 | orchestrator -> localhost | |== ..+. S + . |
2026-03-16 00:00:26.468115 | orchestrator -> localhost | |o*... o . . |
2026-03-16 00:00:26.468132 | orchestrator -> localhost | |Xo. . |
2026-03-16 00:00:26.468147 | orchestrator -> localhost | |*o |
2026-03-16 00:00:26.468164 | orchestrator -> localhost | |o o. |
2026-03-16 00:00:26.468181 | orchestrator -> localhost | +----[SHA256]-----+
2026-03-16 00:00:26.468226 | orchestrator -> localhost | ok: Runtime: 0:00:01.494124
2026-03-16 00:00:26.474885 |
2026-03-16 00:00:26.474971 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-03-16 00:00:26.524084 | orchestrator | ok
2026-03-16 00:00:26.543031 | orchestrator | included: /var/lib/zuul/builds/8c2e2d71d28f4d479ff9ce8d3bae7f94/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-03-16 00:00:26.572040 |
2026-03-16 00:00:26.572185 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-03-16 00:00:26.621739 | orchestrator | skipping: Conditional result was False
2026-03-16 00:00:26.628465 |
2026-03-16 00:00:26.628602 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-03-16 00:00:27.633376 | orchestrator | changed
2026-03-16 00:00:27.638511 |
2026-03-16 00:00:27.638592 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-03-16 00:00:27.983916 | orchestrator | ok
2026-03-16 00:00:27.991679 |
2026-03-16 00:00:27.991772 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-03-16 00:00:28.506405 | orchestrator | ok
2026-03-16 00:00:28.513597 |
2026-03-16 00:00:28.513686 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-03-16 00:00:28.971435 | orchestrator | ok
2026-03-16 00:00:28.976389 |
2026-03-16 00:00:28.976481 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-03-16 00:00:29.010578 | orchestrator | skipping: Conditional result was False
2026-03-16 00:00:29.017261 |
2026-03-16 00:00:29.017358 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-03-16 00:00:30.191713 | orchestrator -> localhost | changed
2026-03-16 00:00:30.202614 |
2026-03-16 00:00:30.202706 | TASK [add-build-sshkey : Add back temp key]
2026-03-16 00:00:31.456953 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/8c2e2d71d28f4d479ff9ce8d3bae7f94/work/8c2e2d71d28f4d479ff9ce8d3bae7f94_id_rsa (zuul-build-sshkey)
2026-03-16 00:00:31.457141 | orchestrator -> localhost | ok: Runtime: 0:00:00.039325
2026-03-16 00:00:31.462864 |
2026-03-16 00:00:31.462948 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-03-16 00:00:32.368049 | orchestrator | ok
2026-03-16 00:00:32.372829 |
2026-03-16 00:00:32.372904 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-03-16 00:00:32.397380 | orchestrator | skipping: Conditional result was False
2026-03-16 00:00:32.536475 |
2026-03-16 00:00:32.536587 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-03-16 00:00:33.142545 | orchestrator | ok
2026-03-16 00:00:33.164987 |
2026-03-16 00:00:33.165093 | TASK [validate-host : Define zuul_info_dir fact]
2026-03-16 00:00:33.212554 | orchestrator | ok
2026-03-16 00:00:33.218463 |
2026-03-16 00:00:33.218547 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-03-16 00:00:34.002032 | orchestrator -> localhost | ok
2026-03-16 00:00:34.007991 |
2026-03-16 00:00:34.008079 | TASK [validate-host : Collect information about the host]
2026-03-16 00:00:35.541476 | orchestrator | ok
2026-03-16 00:00:35.581759 |
2026-03-16 00:00:35.581867 | TASK [validate-host : Sanitize hostname]
2026-03-16 00:00:35.750303 | orchestrator | ok
2026-03-16 00:00:35.755461 |
2026-03-16 00:00:35.755607 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-03-16 00:00:36.934117 | orchestrator -> localhost | changed
2026-03-16 00:00:36.939216 |
2026-03-16 00:00:36.939300 | TASK [validate-host : Collect information about zuul worker]
2026-03-16 00:00:37.567630 | orchestrator | ok
2026-03-16 00:00:37.572117 |
2026-03-16 00:00:37.572197 | TASK [validate-host : Write out all zuul information for each host]
2026-03-16 00:00:39.205505 | orchestrator -> localhost | changed
2026-03-16 00:00:39.226996 |
2026-03-16 00:00:39.227106 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-03-16 00:00:39.524933 | orchestrator | ok
2026-03-16 00:00:39.530385 |
2026-03-16 00:00:39.530468 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-03-16 00:02:06.514434 | orchestrator | changed:
2026-03-16 00:02:06.514667 | orchestrator | .d..t...... src/
2026-03-16 00:02:06.514703 | orchestrator | .d..t...... src/github.com/
2026-03-16 00:02:06.514728 | orchestrator | .d..t...... src/github.com/osism/
2026-03-16 00:02:06.514750 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-03-16 00:02:06.515080 | orchestrator | RedHat.yml
2026-03-16 00:02:06.538632 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-03-16 00:02:06.538650 | orchestrator | RedHat.yml
2026-03-16 00:02:06.538701 | orchestrator | = 1.53.0"...
2026-03-16 00:02:18.496778 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-03-16 00:02:18.514836 | orchestrator | - Finding latest version of hashicorp/null...
2026-03-16 00:02:18.646856 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-03-16 00:02:19.369733 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-03-16 00:02:19.432571 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-03-16 00:02:19.992305 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-03-16 00:02:20.052988 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-03-16 00:02:20.595118 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-03-16 00:02:20.595199 | orchestrator |
2026-03-16 00:02:20.595207 | orchestrator | Providers are signed by their developers.
2026-03-16 00:02:20.595214 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-03-16 00:02:20.595233 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-03-16 00:02:20.595242 | orchestrator |
2026-03-16 00:02:20.595247 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-03-16 00:02:20.595253 | orchestrator | selections it made above. Include this file in your version control repository
2026-03-16 00:02:20.595266 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-03-16 00:02:20.595271 | orchestrator | you run "tofu init" in the future.
2026-03-16 00:02:20.595490 | orchestrator |
2026-03-16 00:02:20.595505 | orchestrator | OpenTofu has been successfully initialized!
2026-03-16 00:02:20.595513 | orchestrator |
2026-03-16 00:02:20.595518 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-03-16 00:02:20.595526 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-03-16 00:02:20.595531 | orchestrator | should now work.
2026-03-16 00:02:20.595536 | orchestrator |
2026-03-16 00:02:20.595541 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-03-16 00:02:20.595546 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-03-16 00:02:20.595551 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-03-16 00:02:20.769363 | orchestrator | Created and switched to workspace "ci"!
2026-03-16 00:02:20.769420 | orchestrator |
2026-03-16 00:02:20.769427 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-03-16 00:02:20.769432 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-03-16 00:02:20.769455 | orchestrator | for this configuration.
2026-03-16 00:02:20.947105 | orchestrator | ci.auto.tfvars
2026-03-16 00:02:21.042667 | orchestrator | default_custom.tf
2026-03-16 00:02:22.870082 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-03-16 00:02:23.427583 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-03-16 00:02:23.695361 | orchestrator |
2026-03-16 00:02:23.695426 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-03-16 00:02:23.695434 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-03-16 00:02:23.695439 | orchestrator | + create
2026-03-16 00:02:23.695452 | orchestrator | <= read (data resources)
2026-03-16 00:02:23.695458 | orchestrator |
2026-03-16 00:02:23.695462 | orchestrator | OpenTofu will perform the following actions:
2026-03-16 00:02:23.695466 | orchestrator |
2026-03-16 00:02:23.695470 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-03-16 00:02:23.695475 | orchestrator | # (config refers to values not yet known)
2026-03-16 00:02:23.695479 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-03-16 00:02:23.695483 | orchestrator | + checksum = (known after apply)
2026-03-16 00:02:23.695487 | orchestrator | + created_at = (known after apply)
2026-03-16 00:02:23.695491 | orchestrator | + file = (known after apply)
2026-03-16 00:02:23.695495 | orchestrator | + id = (known after apply)
2026-03-16 00:02:23.695515 | orchestrator | + metadata = (known after apply)
2026-03-16 00:02:23.695519 | orchestrator | + min_disk_gb = (known after apply)
2026-03-16 00:02:23.695523 | orchestrator | + min_ram_mb = (known after apply)
2026-03-16 00:02:23.695527 | orchestrator | + most_recent = true
2026-03-16 00:02:23.695531 | orchestrator | + name = (known after apply)
2026-03-16 00:02:23.695535 | orchestrator | + protected = (known after apply)
2026-03-16 00:02:23.695538 | orchestrator | + region = (known after apply)
2026-03-16 00:02:23.695545 | orchestrator | + schema = (known after apply)
2026-03-16 00:02:23.695549 | orchestrator | + size_bytes = (known after apply)
2026-03-16 00:02:23.695553 | orchestrator | + tags = (known after apply)
2026-03-16 00:02:23.695556 | orchestrator | + updated_at = (known after apply)
2026-03-16 00:02:23.695560 | orchestrator | }
2026-03-16 00:02:23.695566 | orchestrator |
2026-03-16 00:02:23.695570 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-03-16 00:02:23.695574 | orchestrator | # (config refers to values not yet known)
2026-03-16 00:02:23.695578 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-03-16 00:02:23.695582 | orchestrator | + checksum = (known after apply)
2026-03-16 00:02:23.695586 | orchestrator | + created_at = (known after apply)
2026-03-16 00:02:23.695589 | orchestrator | + file = (known after apply)
2026-03-16 00:02:23.695593 | orchestrator | + id = (known after apply)
2026-03-16 00:02:23.695597 | orchestrator | + metadata = (known after apply)
2026-03-16 00:02:23.695601 | orchestrator | + min_disk_gb = (known after apply)
2026-03-16 00:02:23.695604 | orchestrator | + min_ram_mb = (known after apply)
2026-03-16 00:02:23.695608 | orchestrator | + most_recent = true
2026-03-16 00:02:23.695612 | orchestrator | + name = (known after apply)
2026-03-16 00:02:23.695616 | orchestrator | + protected = (known after apply)
2026-03-16 00:02:23.695620 | orchestrator | + region = (known after apply)
2026-03-16 00:02:23.695623 | orchestrator | + schema = (known after apply)
2026-03-16 00:02:23.695627 | orchestrator | + size_bytes = (known after apply)
2026-03-16 00:02:23.695631 | orchestrator | + tags = (known after apply)
2026-03-16 00:02:23.695634 | orchestrator | + updated_at = (known after apply)
2026-03-16 00:02:23.695638 | orchestrator | }
2026-03-16 00:02:23.695644 | orchestrator |
2026-03-16 00:02:23.695647 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-03-16 00:02:23.695652 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-03-16 00:02:23.695656 | orchestrator | + content = (known after apply)
2026-03-16 00:02:23.695660 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-16 00:02:23.695664 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-16 00:02:23.695667 | orchestrator | + content_md5 = (known after apply)
2026-03-16 00:02:23.695671 | orchestrator | + content_sha1 = (known after apply)
2026-03-16 00:02:23.695675 | orchestrator | + content_sha256 = (known after apply)
2026-03-16 00:02:23.695679 | orchestrator | + content_sha512 = (known after apply)
2026-03-16 00:02:23.695683 | orchestrator | + directory_permission = "0777"
2026-03-16 00:02:23.695686 | orchestrator | + file_permission = "0644"
2026-03-16 00:02:23.695690 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-03-16 00:02:23.695694 | orchestrator | + id = (known after apply)
2026-03-16 00:02:23.695698 | orchestrator | }
2026-03-16 00:02:23.695703 | orchestrator |
2026-03-16 00:02:23.695707 | orchestrator | # local_file.id_rsa_pub will be created
2026-03-16 00:02:23.695711 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-03-16 00:02:23.695714 | orchestrator | + content = (known after apply)
2026-03-16 00:02:23.695718 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-16 00:02:23.695722 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-16 00:02:23.695726 | orchestrator | + content_md5 = (known after apply)
2026-03-16 00:02:23.695733 | orchestrator | + content_sha1 = (known after apply)
2026-03-16 00:02:23.695737 | orchestrator | + content_sha256 = (known after apply)
2026-03-16 00:02:23.695741 | orchestrator | + content_sha512 = (known after apply)
2026-03-16 00:02:23.695745 | orchestrator | + directory_permission = "0777"
2026-03-16 00:02:23.695748 | orchestrator | + file_permission = "0644"
2026-03-16 00:02:23.695756 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-03-16 00:02:23.695763 | orchestrator | + id = (known after apply)
2026-03-16 00:02:23.695767 | orchestrator | }
2026-03-16 00:02:23.695829 | orchestrator |
2026-03-16 00:02:23.695841 | orchestrator | # local_file.inventory will be created
2026-03-16 00:02:23.695845 | orchestrator | + resource "local_file" "inventory" {
2026-03-16 00:02:23.695849 | orchestrator | + content = (known after apply)
2026-03-16 00:02:23.695864 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-16 00:02:23.695868 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-16 00:02:23.695871 | orchestrator | + content_md5 = (known after apply)
2026-03-16 00:02:23.695875 | orchestrator | + content_sha1 = (known after apply)
2026-03-16 00:02:23.695879 | orchestrator | + content_sha256 = (known after apply)
2026-03-16 00:02:23.695883 | orchestrator | + content_sha512 = (known after apply)
2026-03-16 00:02:23.695887 | orchestrator | + directory_permission = "0777"
2026-03-16 00:02:23.695893 | orchestrator | + file_permission = "0644"
2026-03-16 00:02:23.695897 | orchestrator | + filename = "inventory.ci"
2026-03-16 00:02:23.695901 | orchestrator | + id = (known after apply)
2026-03-16 00:02:23.695905 | orchestrator | }
2026-03-16 00:02:23.695910 | orchestrator |
2026-03-16 00:02:23.695914 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-03-16 00:02:23.695918 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-03-16 00:02:23.695922 | orchestrator | + content = (sensitive value)
2026-03-16 00:02:23.695925 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-16 00:02:23.695929 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-16 00:02:23.695933 | orchestrator | + content_md5 = (known after apply)
2026-03-16 00:02:23.695937 | orchestrator | + content_sha1 = (known after apply)
2026-03-16 00:02:23.695940 | orchestrator | + content_sha256 = (known after apply)
2026-03-16 00:02:23.695944 | orchestrator | + content_sha512 = (known after apply)
2026-03-16 00:02:23.695948 | orchestrator | + directory_permission = "0700"
2026-03-16 00:02:23.695952 | orchestrator | + file_permission = "0600"
2026-03-16 00:02:23.695955 | orchestrator | + filename = ".id_rsa.ci"
2026-03-16 00:02:23.695959 | orchestrator | + id = (known after apply)
2026-03-16 00:02:23.695963 | orchestrator | }
2026-03-16 00:02:23.695967 | orchestrator |
2026-03-16 00:02:23.695970 | orchestrator | # null_resource.node_semaphore will be created
2026-03-16 00:02:23.695974 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-03-16 00:02:23.695978 | orchestrator | + id = (known after apply)
2026-03-16 00:02:23.695982 | orchestrator | }
2026-03-16 00:02:23.695987 | orchestrator |
2026-03-16 00:02:23.695991 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-03-16 00:02:23.695996 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-03-16 00:02:23.696000 | orchestrator | + attachment = (known after apply)
2026-03-16 00:02:23.696003 | orchestrator | + availability_zone = "nova"
2026-03-16 00:02:23.696007 | orchestrator | + id = (known after apply)
2026-03-16 00:02:23.696011 | orchestrator | + image_id = (known after apply)
2026-03-16 00:02:23.696015 | orchestrator | + metadata = (known after apply)
2026-03-16 00:02:23.696019 | orchestrator | + name = "testbed-volume-manager-base"
2026-03-16 00:02:23.696022 | orchestrator | + region = (known after apply)
2026-03-16 00:02:23.696026 | orchestrator | + size = 80
2026-03-16 00:02:23.696030 | orchestrator | + volume_retype_policy = "never"
2026-03-16 00:02:23.696034 | orchestrator | + volume_type = "ssd"
2026-03-16 00:02:23.696038 | orchestrator | }
2026-03-16 00:02:23.696043 | orchestrator |
2026-03-16 00:02:23.696047 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-03-16 00:02:23.696051 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-16 00:02:23.696054 | orchestrator | + attachment = (known after apply)
2026-03-16 00:02:23.696058 | orchestrator | + availability_zone = "nova"
2026-03-16 00:02:23.696062 | orchestrator | + id = (known after apply)
2026-03-16 00:02:23.696070 | orchestrator | + image_id = (known after apply)
2026-03-16 00:02:23.696074 | orchestrator | + metadata = (known after apply)
2026-03-16 00:02:23.696078 | orchestrator | + name = "testbed-volume-0-node-base"
2026-03-16 00:02:23.696081 | orchestrator | + region = (known after apply)
2026-03-16 00:02:23.696085 | orchestrator | + size = 80
2026-03-16 00:02:23.696089 | orchestrator | + volume_retype_policy = "never"
2026-03-16 00:02:23.696093 | orchestrator | + volume_type = "ssd"
2026-03-16 00:02:23.696097 | orchestrator | }
2026-03-16 00:02:23.696102 | orchestrator |
2026-03-16 00:02:23.696106 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-03-16 00:02:23.696110 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-16 00:02:23.696113 | orchestrator | + attachment = (known after apply)
2026-03-16 00:02:23.696117 | orchestrator | + availability_zone = "nova"
2026-03-16 00:02:23.696121 | orchestrator | + id = (known after apply)
2026-03-16 00:02:23.696125 | orchestrator | + image_id = (known after apply)
2026-03-16 00:02:23.696128 | orchestrator | + metadata = (known after apply)
2026-03-16 00:02:23.696132 | orchestrator | + name = "testbed-volume-1-node-base"
2026-03-16 00:02:23.696136 | orchestrator | + region = (known after apply)
2026-03-16 00:02:23.696140 | orchestrator | + size = 80
2026-03-16 00:02:23.696144 | orchestrator | + volume_retype_policy = "never"
2026-03-16 00:02:23.696147 | orchestrator | + volume_type = "ssd"
2026-03-16 00:02:23.696151 | orchestrator | }
2026-03-16 00:02:23.696156 | orchestrator |
2026-03-16 00:02:23.696160 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-03-16 00:02:23.696164 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-16 00:02:23.696168 | orchestrator | + attachment = (known after apply)
2026-03-16 00:02:23.696172 | orchestrator | + availability_zone = "nova"
2026-03-16 00:02:23.696175 | orchestrator | + id = (known after apply)
2026-03-16 00:02:23.696179 | orchestrator | + image_id = (known after apply)
2026-03-16 00:02:23.696183 | orchestrator | + metadata = (known after apply)
2026-03-16 00:02:23.696187 | orchestrator | + name = "testbed-volume-2-node-base"
2026-03-16 00:02:23.696191 | orchestrator | + region = (known after apply)
2026-03-16 00:02:23.696194 | orchestrator | + size = 80
2026-03-16 00:02:23.696198 | orchestrator | + volume_retype_policy = "never"
2026-03-16 00:02:23.696202 | orchestrator | + volume_type = "ssd"
2026-03-16 00:02:23.696206 | orchestrator | }
2026-03-16 00:02:23.696636 | orchestrator |
2026-03-16 00:02:23.696668 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-03-16 00:02:23.696673 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-16 00:02:23.696677 | orchestrator | + attachment = (known after apply)
2026-03-16 00:02:23.696681 | orchestrator | + availability_zone = "nova"
2026-03-16 00:02:23.696685 | orchestrator | + id = (known after apply)
2026-03-16 00:02:23.696689 | orchestrator | + image_id = (known after apply)
2026-03-16 00:02:23.696693 | orchestrator | + metadata = (known after apply)
2026-03-16 00:02:23.696704 | orchestrator | + name = "testbed-volume-3-node-base"
2026-03-16 00:02:23.696708 | orchestrator | + region = (known after apply)
2026-03-16 00:02:23.696712 | orchestrator | + size = 80
2026-03-16 00:02:23.696716 | orchestrator | + volume_retype_policy = "never"
2026-03-16 00:02:23.696720 | orchestrator | + volume_type = "ssd"
2026-03-16 00:02:23.696724 | orchestrator | }
2026-03-16 00:02:23.696727 | orchestrator |
2026-03-16 00:02:23.696732 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-03-16 00:02:23.696736 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-16 00:02:23.696739 | orchestrator | + attachment = (known after apply)
2026-03-16 00:02:23.696743 | orchestrator | + availability_zone = "nova"
2026-03-16 00:02:23.696747 | orchestrator | + id = (known after apply)
2026-03-16 00:02:23.696758 | orchestrator | + image_id = (known after apply)
2026-03-16 00:02:23.696762 | orchestrator | + metadata = (known after apply)
2026-03-16 00:02:23.696766 | orchestrator | + name = "testbed-volume-4-node-base"
2026-03-16 00:02:23.696769 | orchestrator | + region = (known after apply)
2026-03-16 00:02:23.696773 | orchestrator | + size = 80
2026-03-16 00:02:23.696777 | orchestrator | + volume_retype_policy = "never"
2026-03-16 00:02:23.696781 | orchestrator | + volume_type = "ssd"
2026-03-16 00:02:23.696785 | orchestrator | }
2026-03-16 00:02:23.696788 | orchestrator |
2026-03-16 00:02:23.696792 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-03-16 00:02:23.696796 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-16 00:02:23.696800 | orchestrator | + attachment = (known after apply)
2026-03-16 00:02:23.696804 | orchestrator | + availability_zone = "nova"
2026-03-16 00:02:23.696807 | orchestrator | + id = (known after apply)
2026-03-16 00:02:23.696811 | orchestrator | + image_id = (known after apply)
2026-03-16 00:02:23.696815 | orchestrator | + metadata = (known after apply)
2026-03-16 00:02:23.696819 | orchestrator | + name = "testbed-volume-5-node-base"
2026-03-16 00:02:23.696823 | orchestrator | + region = (known after apply)
2026-03-16 00:02:23.696826 | orchestrator | + size = 80
2026-03-16 00:02:23.696830 | orchestrator | + volume_retype_policy = "never"
2026-03-16 00:02:23.696834 | orchestrator | + volume_type = "ssd"
2026-03-16 00:02:23.696838 | orchestrator | }
2026-03-16 00:02:23.696842 | orchestrator |
2026-03-16 00:02:23.696845 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-03-16 00:02:23.696851 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-16 00:02:23.696855 | orchestrator | + attachment = (known after apply)
2026-03-16 00:02:23.696858 | orchestrator | + availability_zone = "nova"
2026-03-16 00:02:23.696862 | orchestrator | + id = (known after apply)
2026-03-16 00:02:23.696866 | orchestrator | + metadata = (known after apply)
2026-03-16 00:02:23.696870 | orchestrator | + name = "testbed-volume-0-node-3"
2026-03-16 00:02:23.696874 | orchestrator | + region = (known after apply)
2026-03-16 00:02:23.696878 | orchestrator | + size = 20
2026-03-16 00:02:23.696882 | orchestrator | + volume_retype_policy = "never"
2026-03-16 00:02:23.696885 | orchestrator | + volume_type = "ssd"
2026-03-16 00:02:23.696889 | orchestrator | }
2026-03-16 00:02:23.696893 | orchestrator |
2026-03-16 00:02:23.696897 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-03-16 00:02:23.696901 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-16 00:02:23.696904 | orchestrator | + attachment = (known after apply)
2026-03-16 00:02:23.696908 | orchestrator | + availability_zone = "nova"
2026-03-16 00:02:23.696912 | orchestrator | + id = (known after apply)
2026-03-16 00:02:23.696916 | orchestrator | + metadata = (known after apply)
2026-03-16 00:02:23.696919 | orchestrator | + name = "testbed-volume-1-node-4"
2026-03-16 00:02:23.696923 | orchestrator | + region = (known after apply)
2026-03-16 00:02:23.696927 | orchestrator | + size = 20
2026-03-16 00:02:23.696931 | orchestrator | + volume_retype_policy = "never"
2026-03-16 00:02:23.696934 | orchestrator | + volume_type = "ssd"
2026-03-16 00:02:23.696938 | orchestrator | }
2026-03-16 00:02:23.696942 | orchestrator |
2026-03-16 00:02:23.696946 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-03-16 00:02:23.696950 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-16 00:02:23.696954 | orchestrator | + attachment = (known after apply)
2026-03-16 00:02:23.696957 | orchestrator | + availability_zone = "nova"
2026-03-16 00:02:23.696961 | orchestrator | + id = (known after apply)
2026-03-16 00:02:23.696965 | orchestrator | + metadata = (known after apply)
2026-03-16 00:02:23.696969 | orchestrator | + name = "testbed-volume-2-node-5"
2026-03-16 00:02:23.696972 | orchestrator | + region = (known after apply)
2026-03-16 00:02:23.696979 | orchestrator | + size = 20
2026-03-16 00:02:23.696983 | orchestrator | + volume_retype_policy = "never"
2026-03-16 00:02:23.696987 | orchestrator | + volume_type = "ssd"
2026-03-16 00:02:23.696991 | orchestrator | }
2026-03-16 00:02:23.696998 | orchestrator |
2026-03-16 00:02:23.697002 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-03-16 00:02:23.697006 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-16 00:02:23.697010 | orchestrator | + attachment = (known after apply)
2026-03-16 00:02:23.697013 | orchestrator | + availability_zone = "nova"
2026-03-16 00:02:23.697017 | orchestrator | + id = (known after apply)
2026-03-16 00:02:23.697021 | orchestrator | + metadata = (known after apply)
2026-03-16 00:02:23.697025 | orchestrator | + name = "testbed-volume-3-node-3"
2026-03-16 00:02:23.697029 | orchestrator | + region = (known after apply)
2026-03-16 00:02:23.697035 | orchestrator | + size = 20
2026-03-16 00:02:23.697040 | orchestrator | + volume_retype_policy = "never"
2026-03-16 00:02:23.697047 | orchestrator | + volume_type = "ssd"
2026-03-16 00:02:23.697052 | orchestrator | }
2026-03-16 00:02:23.697056 | orchestrator |
2026-03-16 00:02:23.697060 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-03-16 00:02:23.697063 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-16 00:02:23.697067 | orchestrator | + attachment = (known after apply)
2026-03-16 00:02:23.697071 | orchestrator | + availability_zone = "nova"
2026-03-16 00:02:23.697075 | orchestrator | + id = (known after apply)
2026-03-16 00:02:23.697078 | orchestrator | + metadata = (known after apply)
2026-03-16 00:02:23.697082 | orchestrator | + name = "testbed-volume-4-node-4"
2026-03-16 00:02:23.697086 | orchestrator | + region = (known after apply)
2026-03-16 00:02:23.697092 | orchestrator | + size = 20
2026-03-16 00:02:23.697096 | orchestrator | + volume_retype_policy = "never"
2026-03-16 00:02:23.697100 | orchestrator | + volume_type = "ssd"
2026-03-16 00:02:23.697104 | orchestrator | }
2026-03-16 00:02:23.697107 | orchestrator |
2026-03-16 00:02:23.697111 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-03-16 00:02:23.697115 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-16 00:02:23.697119 | orchestrator | + attachment = (known after apply)
2026-03-16 00:02:23.697122 | orchestrator | + availability_zone = "nova"
2026-03-16 00:02:23.697126 | orchestrator | + id = (known after apply)
2026-03-16 00:02:23.697130 | orchestrator | + metadata = (known after apply)
2026-03-16 00:02:23.697133 | orchestrator | + name = "testbed-volume-5-node-5"
2026-03-16 00:02:23.697137 | orchestrator | + region = (known after apply)
2026-03-16 00:02:23.697141 | orchestrator | + size = 20
2026-03-16 00:02:23.697145 | orchestrator | + volume_retype_policy = "never"
2026-03-16 00:02:23.697148 | orchestrator | + volume_type = "ssd"
2026-03-16 00:02:23.697152 | orchestrator | }
2026-03-16 00:02:23.697156 | orchestrator |
2026-03-16 00:02:23.697160 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-03-16 00:02:23.697163 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-16 00:02:23.697167 | orchestrator | + attachment = (known after apply)
2026-03-16 00:02:23.697171 | orchestrator | + availability_zone = "nova"
2026-03-16 00:02:23.697174 | orchestrator | + id = (known after apply)
2026-03-16 00:02:23.697178 | orchestrator | + metadata = (known after apply)
2026-03-16 00:02:23.697182 | orchestrator | + name = "testbed-volume-6-node-3"
2026-03-16 00:02:23.697186 | orchestrator | + region = (known after apply)
2026-03-16 00:02:23.697189 | orchestrator | + size = 20
2026-03-16 00:02:23.697193 | orchestrator | + volume_retype_policy = "never"
2026-03-16 00:02:23.697197 | orchestrator | + volume_type = "ssd"
2026-03-16 00:02:23.697200 | orchestrator | }
2026-03-16 00:02:23.697204 | orchestrator |
2026-03-16 00:02:23.697208 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-03-16 00:02:23.697212 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-16 00:02:23.697240 | orchestrator | + attachment = (known after apply)
2026-03-16 00:02:23.697245 | orchestrator | + availability_zone = "nova"
2026-03-16 00:02:23.697248 | orchestrator | + id = (known after apply)
2026-03-16 00:02:23.697252 | orchestrator | + metadata = (known after apply)
2026-03-16 00:02:23.697256 | orchestrator | + name = "testbed-volume-7-node-4"
2026-03-16 00:02:23.697260 | orchestrator | + region = (known after apply)
2026-03-16 00:02:23.697264 | orchestrator | + size = 20
2026-03-16 00:02:23.697267 | orchestrator | + volume_retype_policy = "never"
2026-03-16 00:02:23.697271 | orchestrator | + volume_type = "ssd"
2026-03-16 00:02:23.697275 | orchestrator | }
2026-03-16 00:02:23.697279 | orchestrator |
2026-03-16 00:02:23.697283 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-03-16 00:02:23.697286 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-16 00:02:23.697290 | orchestrator | + attachment = (known after apply) 2026-03-16 00:02:23.697294 | orchestrator | + availability_zone = "nova" 2026-03-16 00:02:23.697298 | orchestrator | + id = (known after apply) 2026-03-16 00:02:23.697302 | orchestrator | + metadata = (known after apply) 2026-03-16 00:02:23.697305 | orchestrator | + name = "testbed-volume-8-node-5" 2026-03-16 00:02:23.697309 | orchestrator | + region = (known after apply) 2026-03-16 00:02:23.697313 | orchestrator | + size = 20 2026-03-16 00:02:23.697317 | orchestrator | + volume_retype_policy = "never" 2026-03-16 00:02:23.697320 | orchestrator | + volume_type = "ssd" 2026-03-16 00:02:23.697324 | orchestrator | } 2026-03-16 00:02:23.697330 | orchestrator | 2026-03-16 00:02:23.697334 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-03-16 00:02:23.697338 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-03-16 00:02:23.697341 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-16 00:02:23.697348 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-16 00:02:23.697352 | orchestrator | + all_metadata = (known after apply) 2026-03-16 00:02:23.697356 | orchestrator | + all_tags = (known after apply) 2026-03-16 00:02:23.697360 | orchestrator | + availability_zone = "nova" 2026-03-16 00:02:23.697363 | orchestrator | + config_drive = true 2026-03-16 00:02:23.697367 | orchestrator | + created = (known after apply) 2026-03-16 00:02:23.697371 | orchestrator | + flavor_id = (known after apply) 2026-03-16 00:02:23.697375 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-03-16 00:02:23.697379 | orchestrator | + force_delete = false 2026-03-16 00:02:23.697382 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-16 00:02:23.697386 | 
orchestrator | + id = (known after apply) 2026-03-16 00:02:23.697390 | orchestrator | + image_id = (known after apply) 2026-03-16 00:02:23.697393 | orchestrator | + image_name = (known after apply) 2026-03-16 00:02:23.697397 | orchestrator | + key_pair = "testbed" 2026-03-16 00:02:23.697401 | orchestrator | + name = "testbed-manager" 2026-03-16 00:02:23.697405 | orchestrator | + power_state = "active" 2026-03-16 00:02:23.697408 | orchestrator | + region = (known after apply) 2026-03-16 00:02:23.697412 | orchestrator | + security_groups = (known after apply) 2026-03-16 00:02:23.697416 | orchestrator | + stop_before_destroy = false 2026-03-16 00:02:23.697419 | orchestrator | + updated = (known after apply) 2026-03-16 00:02:23.697423 | orchestrator | + user_data = (sensitive value) 2026-03-16 00:02:23.697427 | orchestrator | 2026-03-16 00:02:23.697431 | orchestrator | + block_device { 2026-03-16 00:02:23.697438 | orchestrator | + boot_index = 0 2026-03-16 00:02:23.697442 | orchestrator | + delete_on_termination = false 2026-03-16 00:02:23.697448 | orchestrator | + destination_type = "volume" 2026-03-16 00:02:23.697452 | orchestrator | + multiattach = false 2026-03-16 00:02:23.697455 | orchestrator | + source_type = "volume" 2026-03-16 00:02:23.697459 | orchestrator | + uuid = (known after apply) 2026-03-16 00:02:23.697470 | orchestrator | } 2026-03-16 00:02:23.697474 | orchestrator | 2026-03-16 00:02:23.697478 | orchestrator | + network { 2026-03-16 00:02:23.697482 | orchestrator | + access_network = false 2026-03-16 00:02:23.697485 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-16 00:02:23.697489 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-16 00:02:23.697493 | orchestrator | + mac = (known after apply) 2026-03-16 00:02:23.697496 | orchestrator | + name = (known after apply) 2026-03-16 00:02:23.697500 | orchestrator | + port = (known after apply) 2026-03-16 00:02:23.697504 | orchestrator | + uuid = (known after apply) 2026-03-16 
00:02:23.697508 | orchestrator | } 2026-03-16 00:02:23.697511 | orchestrator | } 2026-03-16 00:02:23.697772 | orchestrator | 2026-03-16 00:02:23.697786 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-03-16 00:02:23.697791 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-16 00:02:23.697794 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-16 00:02:23.697798 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-16 00:02:23.697802 | orchestrator | + all_metadata = (known after apply) 2026-03-16 00:02:23.697806 | orchestrator | + all_tags = (known after apply) 2026-03-16 00:02:23.697810 | orchestrator | + availability_zone = "nova" 2026-03-16 00:02:23.697813 | orchestrator | + config_drive = true 2026-03-16 00:02:23.697817 | orchestrator | + created = (known after apply) 2026-03-16 00:02:23.697821 | orchestrator | + flavor_id = (known after apply) 2026-03-16 00:02:23.697824 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-16 00:02:23.697828 | orchestrator | + force_delete = false 2026-03-16 00:02:23.697832 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-16 00:02:23.697836 | orchestrator | + id = (known after apply) 2026-03-16 00:02:23.697840 | orchestrator | + image_id = (known after apply) 2026-03-16 00:02:23.697844 | orchestrator | + image_name = (known after apply) 2026-03-16 00:02:23.697847 | orchestrator | + key_pair = "testbed" 2026-03-16 00:02:23.697851 | orchestrator | + name = "testbed-node-0" 2026-03-16 00:02:23.697855 | orchestrator | + power_state = "active" 2026-03-16 00:02:23.697859 | orchestrator | + region = (known after apply) 2026-03-16 00:02:23.697862 | orchestrator | + security_groups = (known after apply) 2026-03-16 00:02:23.697866 | orchestrator | + stop_before_destroy = false 2026-03-16 00:02:23.697870 | orchestrator | + updated = (known after apply) 2026-03-16 00:02:23.697874 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-16 00:02:23.697878 | orchestrator | 2026-03-16 00:02:23.697882 | orchestrator | + block_device { 2026-03-16 00:02:23.697885 | orchestrator | + boot_index = 0 2026-03-16 00:02:23.697889 | orchestrator | + delete_on_termination = false 2026-03-16 00:02:23.697893 | orchestrator | + destination_type = "volume" 2026-03-16 00:02:23.697897 | orchestrator | + multiattach = false 2026-03-16 00:02:23.697900 | orchestrator | + source_type = "volume" 2026-03-16 00:02:23.697904 | orchestrator | + uuid = (known after apply) 2026-03-16 00:02:23.697908 | orchestrator | } 2026-03-16 00:02:23.697912 | orchestrator | 2026-03-16 00:02:23.697915 | orchestrator | + network { 2026-03-16 00:02:23.697919 | orchestrator | + access_network = false 2026-03-16 00:02:23.697923 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-16 00:02:23.697927 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-16 00:02:23.697931 | orchestrator | + mac = (known after apply) 2026-03-16 00:02:23.697935 | orchestrator | + name = (known after apply) 2026-03-16 00:02:23.697938 | orchestrator | + port = (known after apply) 2026-03-16 00:02:23.697942 | orchestrator | + uuid = (known after apply) 2026-03-16 00:02:23.697946 | orchestrator | } 2026-03-16 00:02:23.697950 | orchestrator | } 2026-03-16 00:02:23.697956 | orchestrator | 2026-03-16 00:02:23.697960 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-03-16 00:02:23.697964 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-16 00:02:23.697968 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-16 00:02:23.697979 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-16 00:02:23.697983 | orchestrator | + all_metadata = (known after apply) 2026-03-16 00:02:23.697986 | orchestrator | + all_tags = (known after apply) 2026-03-16 00:02:23.697990 | orchestrator | + availability_zone = "nova" 2026-03-16 00:02:23.698010 
| orchestrator | + config_drive = true 2026-03-16 00:02:23.698028 | orchestrator | + created = (known after apply) 2026-03-16 00:02:23.698032 | orchestrator | + flavor_id = (known after apply) 2026-03-16 00:02:23.698036 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-16 00:02:23.698040 | orchestrator | + force_delete = false 2026-03-16 00:02:23.698044 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-16 00:02:23.698048 | orchestrator | + id = (known after apply) 2026-03-16 00:02:23.698051 | orchestrator | + image_id = (known after apply) 2026-03-16 00:02:23.698055 | orchestrator | + image_name = (known after apply) 2026-03-16 00:02:23.698059 | orchestrator | + key_pair = "testbed" 2026-03-16 00:02:23.698063 | orchestrator | + name = "testbed-node-1" 2026-03-16 00:02:23.698067 | orchestrator | + power_state = "active" 2026-03-16 00:02:23.698070 | orchestrator | + region = (known after apply) 2026-03-16 00:02:23.698074 | orchestrator | + security_groups = (known after apply) 2026-03-16 00:02:23.698078 | orchestrator | + stop_before_destroy = false 2026-03-16 00:02:23.698082 | orchestrator | + updated = (known after apply) 2026-03-16 00:02:23.698086 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-16 00:02:23.698090 | orchestrator | 2026-03-16 00:02:23.698093 | orchestrator | + block_device { 2026-03-16 00:02:23.698097 | orchestrator | + boot_index = 0 2026-03-16 00:02:23.698101 | orchestrator | + delete_on_termination = false 2026-03-16 00:02:23.698105 | orchestrator | + destination_type = "volume" 2026-03-16 00:02:23.698108 | orchestrator | + multiattach = false 2026-03-16 00:02:23.698112 | orchestrator | + source_type = "volume" 2026-03-16 00:02:23.698116 | orchestrator | + uuid = (known after apply) 2026-03-16 00:02:23.698120 | orchestrator | } 2026-03-16 00:02:23.698124 | orchestrator | 2026-03-16 00:02:23.698128 | orchestrator | + network { 2026-03-16 00:02:23.698131 | orchestrator | + access_network = 
false 2026-03-16 00:02:23.698135 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-16 00:02:23.698139 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-16 00:02:23.698143 | orchestrator | + mac = (known after apply) 2026-03-16 00:02:23.698147 | orchestrator | + name = (known after apply) 2026-03-16 00:02:23.698151 | orchestrator | + port = (known after apply) 2026-03-16 00:02:23.698154 | orchestrator | + uuid = (known after apply) 2026-03-16 00:02:23.698158 | orchestrator | } 2026-03-16 00:02:23.698162 | orchestrator | } 2026-03-16 00:02:23.698168 | orchestrator | 2026-03-16 00:02:23.698172 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-03-16 00:02:23.698176 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-16 00:02:23.698179 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-16 00:02:23.698183 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-16 00:02:23.698188 | orchestrator | + all_metadata = (known after apply) 2026-03-16 00:02:23.698192 | orchestrator | + all_tags = (known after apply) 2026-03-16 00:02:23.698199 | orchestrator | + availability_zone = "nova" 2026-03-16 00:02:23.698203 | orchestrator | + config_drive = true 2026-03-16 00:02:23.698207 | orchestrator | + created = (known after apply) 2026-03-16 00:02:23.698211 | orchestrator | + flavor_id = (known after apply) 2026-03-16 00:02:23.698215 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-16 00:02:23.698235 | orchestrator | + force_delete = false 2026-03-16 00:02:23.698240 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-16 00:02:23.698243 | orchestrator | + id = (known after apply) 2026-03-16 00:02:23.698247 | orchestrator | + image_id = (known after apply) 2026-03-16 00:02:23.698255 | orchestrator | + image_name = (known after apply) 2026-03-16 00:02:23.698259 | orchestrator | + key_pair = "testbed" 2026-03-16 00:02:23.698263 | orchestrator | + name = 
"testbed-node-2" 2026-03-16 00:02:23.698266 | orchestrator | + power_state = "active" 2026-03-16 00:02:23.698270 | orchestrator | + region = (known after apply) 2026-03-16 00:02:23.698274 | orchestrator | + security_groups = (known after apply) 2026-03-16 00:02:23.698278 | orchestrator | + stop_before_destroy = false 2026-03-16 00:02:23.698281 | orchestrator | + updated = (known after apply) 2026-03-16 00:02:23.698285 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-16 00:02:23.698289 | orchestrator | 2026-03-16 00:02:23.698293 | orchestrator | + block_device { 2026-03-16 00:02:23.698297 | orchestrator | + boot_index = 0 2026-03-16 00:02:23.698300 | orchestrator | + delete_on_termination = false 2026-03-16 00:02:23.698304 | orchestrator | + destination_type = "volume" 2026-03-16 00:02:23.698308 | orchestrator | + multiattach = false 2026-03-16 00:02:23.698311 | orchestrator | + source_type = "volume" 2026-03-16 00:02:23.698315 | orchestrator | + uuid = (known after apply) 2026-03-16 00:02:23.698319 | orchestrator | } 2026-03-16 00:02:23.698323 | orchestrator | 2026-03-16 00:02:23.698327 | orchestrator | + network { 2026-03-16 00:02:23.698331 | orchestrator | + access_network = false 2026-03-16 00:02:23.698334 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-16 00:02:23.698338 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-16 00:02:23.698342 | orchestrator | + mac = (known after apply) 2026-03-16 00:02:23.698346 | orchestrator | + name = (known after apply) 2026-03-16 00:02:23.698350 | orchestrator | + port = (known after apply) 2026-03-16 00:02:23.698353 | orchestrator | + uuid = (known after apply) 2026-03-16 00:02:23.698357 | orchestrator | } 2026-03-16 00:02:23.698361 | orchestrator | } 2026-03-16 00:02:23.698366 | orchestrator | 2026-03-16 00:02:23.698370 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-03-16 00:02:23.698374 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-03-16 00:02:23.698378 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-16 00:02:23.698382 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-16 00:02:23.698386 | orchestrator | + all_metadata = (known after apply) 2026-03-16 00:02:23.698389 | orchestrator | + all_tags = (known after apply) 2026-03-16 00:02:23.698393 | orchestrator | + availability_zone = "nova" 2026-03-16 00:02:23.698397 | orchestrator | + config_drive = true 2026-03-16 00:02:23.698401 | orchestrator | + created = (known after apply) 2026-03-16 00:02:23.698404 | orchestrator | + flavor_id = (known after apply) 2026-03-16 00:02:23.698408 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-16 00:02:23.698412 | orchestrator | + force_delete = false 2026-03-16 00:02:23.698416 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-16 00:02:23.698420 | orchestrator | + id = (known after apply) 2026-03-16 00:02:23.698423 | orchestrator | + image_id = (known after apply) 2026-03-16 00:02:23.698427 | orchestrator | + image_name = (known after apply) 2026-03-16 00:02:23.698431 | orchestrator | + key_pair = "testbed" 2026-03-16 00:02:23.698435 | orchestrator | + name = "testbed-node-3" 2026-03-16 00:02:23.698438 | orchestrator | + power_state = "active" 2026-03-16 00:02:23.698442 | orchestrator | + region = (known after apply) 2026-03-16 00:02:23.698446 | orchestrator | + security_groups = (known after apply) 2026-03-16 00:02:23.698450 | orchestrator | + stop_before_destroy = false 2026-03-16 00:02:23.698453 | orchestrator | + updated = (known after apply) 2026-03-16 00:02:23.698457 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-16 00:02:23.698461 | orchestrator | 2026-03-16 00:02:23.698465 | orchestrator | + block_device { 2026-03-16 00:02:23.698471 | orchestrator | + boot_index = 0 2026-03-16 00:02:23.698475 | orchestrator | + delete_on_termination = false 2026-03-16 
00:02:23.698479 | orchestrator | + destination_type = "volume" 2026-03-16 00:02:23.698485 | orchestrator | + multiattach = false 2026-03-16 00:02:23.698489 | orchestrator | + source_type = "volume" 2026-03-16 00:02:23.698493 | orchestrator | + uuid = (known after apply) 2026-03-16 00:02:23.698497 | orchestrator | } 2026-03-16 00:02:23.698501 | orchestrator | 2026-03-16 00:02:23.698505 | orchestrator | + network { 2026-03-16 00:02:23.698508 | orchestrator | + access_network = false 2026-03-16 00:02:23.698512 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-16 00:02:23.698516 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-16 00:02:23.698520 | orchestrator | + mac = (known after apply) 2026-03-16 00:02:23.698523 | orchestrator | + name = (known after apply) 2026-03-16 00:02:23.698527 | orchestrator | + port = (known after apply) 2026-03-16 00:02:23.698531 | orchestrator | + uuid = (known after apply) 2026-03-16 00:02:23.698535 | orchestrator | } 2026-03-16 00:02:23.698539 | orchestrator | } 2026-03-16 00:02:23.698544 | orchestrator | 2026-03-16 00:02:23.698548 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-03-16 00:02:23.698552 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-16 00:02:23.698556 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-16 00:02:23.698559 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-16 00:02:23.698563 | orchestrator | + all_metadata = (known after apply) 2026-03-16 00:02:23.698567 | orchestrator | + all_tags = (known after apply) 2026-03-16 00:02:23.698571 | orchestrator | + availability_zone = "nova" 2026-03-16 00:02:23.698575 | orchestrator | + config_drive = true 2026-03-16 00:02:23.698579 | orchestrator | + created = (known after apply) 2026-03-16 00:02:23.698582 | orchestrator | + flavor_id = (known after apply) 2026-03-16 00:02:23.698586 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-16 00:02:23.698590 | 
orchestrator | + force_delete = false 2026-03-16 00:02:23.698594 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-16 00:02:23.698597 | orchestrator | + id = (known after apply) 2026-03-16 00:02:23.698601 | orchestrator | + image_id = (known after apply) 2026-03-16 00:02:23.698605 | orchestrator | + image_name = (known after apply) 2026-03-16 00:02:23.698609 | orchestrator | + key_pair = "testbed" 2026-03-16 00:02:23.698613 | orchestrator | + name = "testbed-node-4" 2026-03-16 00:02:23.698620 | orchestrator | + power_state = "active" 2026-03-16 00:02:23.698624 | orchestrator | + region = (known after apply) 2026-03-16 00:02:23.698628 | orchestrator | + security_groups = (known after apply) 2026-03-16 00:02:23.698632 | orchestrator | + stop_before_destroy = false 2026-03-16 00:02:23.698636 | orchestrator | + updated = (known after apply) 2026-03-16 00:02:23.698639 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-16 00:02:23.698643 | orchestrator | 2026-03-16 00:02:23.698647 | orchestrator | + block_device { 2026-03-16 00:02:23.698651 | orchestrator | + boot_index = 0 2026-03-16 00:02:23.698655 | orchestrator | + delete_on_termination = false 2026-03-16 00:02:23.698659 | orchestrator | + destination_type = "volume" 2026-03-16 00:02:23.698662 | orchestrator | + multiattach = false 2026-03-16 00:02:23.698666 | orchestrator | + source_type = "volume" 2026-03-16 00:02:23.698670 | orchestrator | + uuid = (known after apply) 2026-03-16 00:02:23.698674 | orchestrator | } 2026-03-16 00:02:23.698678 | orchestrator | 2026-03-16 00:02:23.698681 | orchestrator | + network { 2026-03-16 00:02:23.698685 | orchestrator | + access_network = false 2026-03-16 00:02:23.698689 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-16 00:02:23.698693 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-16 00:02:23.698696 | orchestrator | + mac = (known after apply) 2026-03-16 00:02:23.698700 | orchestrator | + name = (known 
after apply) 2026-03-16 00:02:23.698704 | orchestrator | + port = (known after apply) 2026-03-16 00:02:23.698708 | orchestrator | + uuid = (known after apply) 2026-03-16 00:02:23.698712 | orchestrator | } 2026-03-16 00:02:23.698716 | orchestrator | } 2026-03-16 00:02:23.698724 | orchestrator | 2026-03-16 00:02:23.698728 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-03-16 00:02:23.698732 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-16 00:02:23.698736 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-16 00:02:23.698740 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-16 00:02:23.698744 | orchestrator | + all_metadata = (known after apply) 2026-03-16 00:02:23.698748 | orchestrator | + all_tags = (known after apply) 2026-03-16 00:02:23.698751 | orchestrator | + availability_zone = "nova" 2026-03-16 00:02:23.698755 | orchestrator | + config_drive = true 2026-03-16 00:02:23.698761 | orchestrator | + created = (known after apply) 2026-03-16 00:02:23.698767 | orchestrator | + flavor_id = (known after apply) 2026-03-16 00:02:23.698773 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-16 00:02:23.698777 | orchestrator | + force_delete = false 2026-03-16 00:02:23.698783 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-16 00:02:23.698787 | orchestrator | + id = (known after apply) 2026-03-16 00:02:23.698791 | orchestrator | + image_id = (known after apply) 2026-03-16 00:02:23.698795 | orchestrator | + image_name = (known after apply) 2026-03-16 00:02:23.698798 | orchestrator | + key_pair = "testbed" 2026-03-16 00:02:23.698802 | orchestrator | + name = "testbed-node-5" 2026-03-16 00:02:23.698806 | orchestrator | + power_state = "active" 2026-03-16 00:02:23.698810 | orchestrator | + region = (known after apply) 2026-03-16 00:02:23.698813 | orchestrator | + security_groups = (known after apply) 2026-03-16 00:02:23.698817 | orchestrator | + 
stop_before_destroy = false 2026-03-16 00:02:23.698821 | orchestrator | + updated = (known after apply) 2026-03-16 00:02:23.698825 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-16 00:02:23.698829 | orchestrator | 2026-03-16 00:02:23.698832 | orchestrator | + block_device { 2026-03-16 00:02:23.698836 | orchestrator | + boot_index = 0 2026-03-16 00:02:23.698840 | orchestrator | + delete_on_termination = false 2026-03-16 00:02:23.698844 | orchestrator | + destination_type = "volume" 2026-03-16 00:02:23.698848 | orchestrator | + multiattach = false 2026-03-16 00:02:23.698851 | orchestrator | + source_type = "volume" 2026-03-16 00:02:23.698855 | orchestrator | + uuid = (known after apply) 2026-03-16 00:02:23.698859 | orchestrator | } 2026-03-16 00:02:23.698863 | orchestrator | 2026-03-16 00:02:23.698867 | orchestrator | + network { 2026-03-16 00:02:23.698870 | orchestrator | + access_network = false 2026-03-16 00:02:23.698874 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-16 00:02:23.698878 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-16 00:02:23.698882 | orchestrator | + mac = (known after apply) 2026-03-16 00:02:23.698886 | orchestrator | + name = (known after apply) 2026-03-16 00:02:23.698889 | orchestrator | + port = (known after apply) 2026-03-16 00:02:23.698893 | orchestrator | + uuid = (known after apply) 2026-03-16 00:02:23.698897 | orchestrator | } 2026-03-16 00:02:23.698901 | orchestrator | } 2026-03-16 00:02:23.698905 | orchestrator | 2026-03-16 00:02:23.698909 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-03-16 00:02:23.698913 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-03-16 00:02:23.698916 | orchestrator | + fingerprint = (known after apply) 2026-03-16 00:02:23.698920 | orchestrator | + id = (known after apply) 2026-03-16 00:02:23.698924 | orchestrator | + name = "testbed" 2026-03-16 00:02:23.698928 | orchestrator | + private_key = 
(sensitive value) 2026-03-16 00:02:23.698932 | orchestrator | + public_key = (known after apply) 2026-03-16 00:02:23.698935 | orchestrator | + region = (known after apply) 2026-03-16 00:02:23.698939 | orchestrator | + user_id = (known after apply) 2026-03-16 00:02:23.698943 | orchestrator | } 2026-03-16 00:02:23.698947 | orchestrator | 2026-03-16 00:02:23.698951 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-03-16 00:02:23.698955 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-16 00:02:23.698962 | orchestrator | + device = (known after apply) 2026-03-16 00:02:23.698966 | orchestrator | + id = (known after apply) 2026-03-16 00:02:23.698970 | orchestrator | + instance_id = (known after apply) 2026-03-16 00:02:23.698974 | orchestrator | + region = (known after apply) 2026-03-16 00:02:23.698978 | orchestrator | + volume_id = (known after apply) 2026-03-16 00:02:23.698981 | orchestrator | } 2026-03-16 00:02:23.698985 | orchestrator | 2026-03-16 00:02:23.698989 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-03-16 00:02:23.698993 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-16 00:02:23.698997 | orchestrator | + device = (known after apply) 2026-03-16 00:02:23.699001 | orchestrator | + id = (known after apply) 2026-03-16 00:02:23.699004 | orchestrator | + instance_id = (known after apply) 2026-03-16 00:02:23.699008 | orchestrator | + region = (known after apply) 2026-03-16 00:02:23.699012 | orchestrator | + volume_id = (known after apply) 2026-03-16 00:02:23.699016 | orchestrator | } 2026-03-16 00:02:23.699019 | orchestrator | 2026-03-16 00:02:23.699023 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-03-16 00:02:23.699027 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
2026-03-16 00:02:23.699031 | orchestrator | {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-03-16 00:02:23.703471 | orchestrator | + network_id = (known after apply)
2026-03-16 00:02:23.703474 | orchestrator | + no_gateway = false
2026-03-16 00:02:23.703478 | orchestrator | + region = (known after apply)
2026-03-16 00:02:23.703482 | orchestrator | + service_types = (known after apply)
2026-03-16 00:02:23.703494 | orchestrator | + tenant_id = (known after apply)
2026-03-16 00:02:23.703499 | orchestrator |
2026-03-16 00:02:23.703503 | orchestrator | + allocation_pool {
2026-03-16 00:02:23.703506 | orchestrator | + end = "192.168.31.250"
2026-03-16 00:02:23.703510 | orchestrator | + start = "192.168.31.200"
2026-03-16 00:02:23.703514 | orchestrator | }
2026-03-16 00:02:23.703518 | orchestrator | }
2026-03-16 00:02:23.703522 | orchestrator |
2026-03-16 00:02:23.703525 | orchestrator | # terraform_data.image will be created
2026-03-16 00:02:23.703529 | orchestrator | + resource "terraform_data" "image" {
2026-03-16 00:02:23.703533 | orchestrator | + id = (known after apply)
2026-03-16 00:02:23.703537 | orchestrator | + input = "Ubuntu 24.04"
2026-03-16 00:02:23.703541 | orchestrator | + output = (known after apply)
2026-03-16 00:02:23.703544 | orchestrator | }
2026-03-16 00:02:23.703548 | orchestrator |
2026-03-16 00:02:23.703552 | orchestrator | # terraform_data.image_node will be created
2026-03-16 00:02:23.703556 | orchestrator | + resource "terraform_data" "image_node" {
2026-03-16 00:02:23.703559 | orchestrator | + id = (known after apply)
2026-03-16 00:02:23.703563 | orchestrator | + input = "Ubuntu 24.04"
2026-03-16 00:02:23.703567 | orchestrator | + output = (known after apply)
2026-03-16 00:02:23.703571 | orchestrator | }
2026-03-16 00:02:23.703574 | orchestrator |
2026-03-16 00:02:23.703578 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy.
2026-03-16 00:02:23.703582 | orchestrator |
2026-03-16 00:02:23.703586 | orchestrator | Changes to Outputs:
2026-03-16 00:02:23.703590 | orchestrator | + manager_address = (sensitive value)
2026-03-16 00:02:23.703594 | orchestrator | + private_key = (sensitive value)
2026-03-16 00:02:23.735291 | orchestrator | terraform_data.image_node: Creating...
2026-03-16 00:02:23.735727 | orchestrator | terraform_data.image: Creating...
2026-03-16 00:02:25.296748 | orchestrator | terraform_data.image: Creation complete after 1s [id=b59063ad-8518-c415-fb80-235f2d995abe]
2026-03-16 00:02:25.296971 | orchestrator | terraform_data.image_node: Creation complete after 1s [id=1c875651-a8a1-a053-5377-9963cab0abc4]
2026-03-16 00:02:25.314320 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-03-16 00:02:25.314686 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-03-16 00:02:25.333497 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-03-16 00:02:25.333538 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-03-16 00:02:25.333543 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-03-16 00:02:25.356578 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-03-16 00:02:25.357051 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-03-16 00:02:25.357143 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-03-16 00:02:25.357351 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-03-16 00:02:25.357969 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-03-16 00:02:25.830094 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-16 00:02:25.834120 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-03-16 00:02:25.840946 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-16 00:02:25.845333 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-03-16 00:02:25.856650 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2026-03-16 00:02:25.860565 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-03-16 00:02:26.496753 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=f7f671ff-97e2-4dd6-b99b-b16c3318280b]
2026-03-16 00:02:26.505357 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-03-16 00:02:29.030311 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=1db695b4-2be8-41cf-b2f3-0a666ad94649]
2026-03-16 00:02:29.038490 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-03-16 00:02:29.046935 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=dd732262-e9ae-4e48-8009-641fb05b3358]
2026-03-16 00:02:29.054775 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-03-16 00:02:29.072870 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=8261b325-336c-474c-bfd4-8f783607e19f]
2026-03-16 00:02:29.077358 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-03-16 00:02:29.087513 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=75257afc-ff3d-423c-9b8c-9aa6b4de753a]
2026-03-16 00:02:29.089152 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=ea250d8e-8a1a-4b7c-87ac-b5f969c8dfc9]
2026-03-16 00:02:29.093892 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-03-16 00:02:29.096962 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-03-16 00:02:29.115787 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=da655a5c-29e3-4c18-87b3-c0b6111b4096]
2026-03-16 00:02:29.121621 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-03-16 00:02:29.184332 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=638de7de-7e30-41bf-b0e2-bce66f40688c]
2026-03-16 00:02:29.188590 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=e5bc35b8-8936-4f39-b3b2-4c8e21a1af22]
2026-03-16 00:02:29.193686 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=573bd76d-2068-40ae-bffe-bd7cc0e0b9d7]
2026-03-16 00:02:29.200388 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-03-16 00:02:29.200464 | orchestrator | local_file.id_rsa_pub: Creating...
2026-03-16 00:02:29.201034 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-03-16 00:02:29.443756 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=29a3bfe98212c0e76c183f8d7d84b32366188633]
2026-03-16 00:02:29.446910 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=d8bec90a089d53db920e4cc8625e6ffddb3db874]
2026-03-16 00:02:29.896042 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=6150f17a-ba1c-4854-a1c2-519cb1eb76a5]
2026-03-16 00:02:31.302287 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 2s [id=d8773da0-f04f-4490-8508-7c405f228e89]
2026-03-16 00:02:31.309788 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-03-16 00:02:32.530214 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=f1441854-4f1b-4d8e-b300-e2132404da8a]
2026-03-16 00:02:32.559422 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=48d6913c-6b49-418e-9a91-33c70485f924]
2026-03-16 00:02:32.584989 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa]
2026-03-16 00:02:32.623514 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=56a6b35b-fc7c-44e9-993c-61410e3d36a4]
2026-03-16 00:02:32.652519 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=fb973c19-0af3-4cee-977b-c7b07b1fc75a]
2026-03-16 00:02:32.660420 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055]
2026-03-16 00:02:35.453979 | orchestrator | openstack_networking_router_v2.router: Creation complete after 4s [id=9c037b39-4054-4f24-88d9-efd00ec87852]
2026-03-16 00:02:35.461694 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-03-16 00:02:35.462541 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-03-16 00:02:35.463389 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-03-16 00:02:35.680615 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=02dd38b0-74b9-4d78-90cc-c5f5bf8329ea]
2026-03-16 00:02:35.692872 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-03-16 00:02:35.697461 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-03-16 00:02:35.698386 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-03-16 00:02:35.700096 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=c9a2ac63-d091-4689-9d6e-3c9b6d081272]
2026-03-16 00:02:35.700546 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-03-16 00:02:35.704930 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-03-16 00:02:35.707193 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-03-16 00:02:35.712468 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-03-16 00:02:35.714868 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-03-16 00:02:35.719124 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-03-16 00:02:36.477952 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 0s [id=47da8cbc-792f-42a6-a63e-385f9740f42b]
2026-03-16 00:02:36.489247 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-03-16 00:02:36.566132 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=7ef17a65-14a5-45ad-8d7d-6b2d58ba90b3]
2026-03-16 00:02:36.574743 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-03-16 00:02:36.715933 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=4b3e1b6d-5967-4aca-b241-a4e1190fd0ab]
2026-03-16 00:02:36.721753 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-03-16 00:02:36.746977 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=b88deca5-ee76-4d7c-ae0f-026fce88e02c]
2026-03-16 00:02:36.752724 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-03-16 00:02:36.754767 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=7ce9516c-9f17-416a-a91e-359565e6bfe4]
2026-03-16 00:02:36.762095 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-03-16 00:02:36.932799 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=8a29def3-4553-41b6-ab9e-d167d6a4cadd]
2026-03-16 00:02:36.943130 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-03-16 00:02:37.123909 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=3022115c-6a6f-474b-a65f-83aae5bc19d2]
2026-03-16 00:02:37.130607 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-03-16 00:02:37.150699 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=13dd379e-5159-47df-b4ff-43b5faa27186]
2026-03-16 00:02:37.388341 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=ed043753-19d9-4d36-846f-b4671734887e]
2026-03-16 00:02:37.626827 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=bb656fc8-9351-4139-b8df-448b84278644]
2026-03-16 00:02:37.743429 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=beef4e48-2d2e-43f7-8a7f-b57e0f3567a7]
2026-03-16 00:02:37.796453 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 2s [id=769b0d77-6b0a-404a-9bae-2e35416beebc]
2026-03-16 00:02:37.856913 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 2s [id=5f16c808-b1d2-478b-85c6-6cb75f148a7d]
2026-03-16 00:02:38.193386 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=9f7c62bf-22d0-40f8-9ea5-6a7cf3b53518]
2026-03-16 00:02:38.427160 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=c80fe71c-0692-4452-aec0-7359f5ff7ff9]
2026-03-16 00:02:38.721723 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 2s [id=f73a9db5-08c4-410f-9ca1-eee18c072861]
2026-03-16 00:02:38.895346 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 4s [id=18233d62-b45f-4e84-861e-d88d75f0e576]
2026-03-16 00:02:38.917290 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-03-16 00:02:38.928960 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-03-16 00:02:38.934633 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-03-16 00:02:38.937379 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-03-16 00:02:38.940968 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-03-16 00:02:38.947097 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-03-16 00:02:38.958362 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-03-16 00:02:41.375897 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=0c731510-ab6d-464b-af02-357dd3ae4ab7]
2026-03-16 00:02:41.386388 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-03-16 00:02:41.390582 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-03-16 00:02:41.395581 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=7de503adc88806ba5182cd4fbde093caa76cdfd1]
2026-03-16 00:02:41.398984 | orchestrator | local_file.inventory: Creating...
2026-03-16 00:02:41.411914 | orchestrator | local_file.inventory: Creation complete after 0s [id=89822f4adbda33a6eb642287127f3daf5038d4ec]
2026-03-16 00:02:42.816925 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 2s [id=0c731510-ab6d-464b-af02-357dd3ae4ab7]
2026-03-16 00:02:48.939357 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-03-16 00:02:48.945559 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-03-16 00:02:48.945822 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-03-16 00:02:48.953068 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-03-16 00:02:48.957353 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-03-16 00:02:48.959586 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-03-16 00:02:58.948466 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-03-16 00:02:58.948598 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-03-16 00:02:58.948615 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-03-16 00:02:58.953868 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-03-16 00:02:58.958185 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-03-16 00:02:58.960533 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-03-16 00:03:08.957208 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-03-16 00:03:08.957346 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-03-16 00:03:08.957378 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-03-16 00:03:08.957389 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-03-16 00:03:08.958466 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-03-16 00:03:08.960678 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-03-16 00:03:09.913263 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=380c5ddb-93da-4095-820c-86f46ae0f048]
2026-03-16 00:03:10.599685 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 32s [id=bb2f37c3-fe81-4c88-8267-14d2bd98ea8d]
2026-03-16 00:03:18.957582 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed]
2026-03-16 00:03:18.957708 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed]
2026-03-16 00:03:18.957744 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed]
2026-03-16 00:03:18.958740 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed]
2026-03-16 00:03:20.147956 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 41s [id=f30b62e1-b662-43d4-8a8b-6e3ef4abf0ff]
2026-03-16 00:03:20.884342 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 42s [id=6a089df1-a042-42f2-913b-0a6443812df4]
2026-03-16 00:03:28.957852 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [50s elapsed]
2026-03-16 00:03:28.958939 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [50s elapsed]
2026-03-16 00:03:30.650166 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 52s [id=b1cedc3a-eb50-4a65-8d99-c92c235b01ef]
2026-03-16 00:03:30.717886 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 52s [id=3e24d3db-7d44-4f08-90f2-e97377dda472]
2026-03-16 00:03:30.728399 | orchestrator | null_resource.node_semaphore: Creating...
2026-03-16 00:03:30.730583 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=6278697147607477563]
2026-03-16 00:03:30.743362 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-03-16 00:03:30.766141 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-03-16 00:03:30.774106 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-03-16 00:03:30.774440 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-03-16 00:03:30.774452 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-03-16 00:03:30.779764 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-03-16 00:03:30.780193 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-03-16 00:03:30.799212 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-03-16 00:03:30.803150 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-03-16 00:03:30.816360 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-03-16 00:03:34.283301 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=380c5ddb-93da-4095-820c-86f46ae0f048/8261b325-336c-474c-bfd4-8f783607e19f]
2026-03-16 00:03:34.310422 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=bb2f37c3-fe81-4c88-8267-14d2bd98ea8d/e5bc35b8-8936-4f39-b3b2-4c8e21a1af22]
2026-03-16 00:03:34.339625 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=3e24d3db-7d44-4f08-90f2-e97377dda472/573bd76d-2068-40ae-bffe-bd7cc0e0b9d7]
2026-03-16 00:03:40.438092 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 9s [id=380c5ddb-93da-4095-820c-86f46ae0f048/638de7de-7e30-41bf-b0e2-bce66f40688c]
2026-03-16 00:03:40.449466 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 9s [id=bb2f37c3-fe81-4c88-8267-14d2bd98ea8d/1db695b4-2be8-41cf-b2f3-0a666ad94649]
2026-03-16 00:03:40.473622 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 9s [id=3e24d3db-7d44-4f08-90f2-e97377dda472/75257afc-ff3d-423c-9b8c-9aa6b4de753a]
2026-03-16 00:03:40.516817 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 10s [id=bb2f37c3-fe81-4c88-8267-14d2bd98ea8d/dd732262-e9ae-4e48-8009-641fb05b3358]
2026-03-16 00:03:40.576733 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 10s [id=3e24d3db-7d44-4f08-90f2-e97377dda472/da655a5c-29e3-4c18-87b3-c0b6111b4096]
2026-03-16 00:03:40.603668 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 10s [id=380c5ddb-93da-4095-820c-86f46ae0f048/ea250d8e-8a1a-4b7c-87ac-b5f969c8dfc9]
2026-03-16 00:03:40.816495 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-03-16 00:03:50.817500 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-03-16 00:03:51.152008 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=535c86ab-40bd-4551-b343-232d9f2f134f]
2026-03-16 00:03:51.167591 | orchestrator |
2026-03-16 00:03:51.167663 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-03-16 00:03:51.167675 | orchestrator |
2026-03-16 00:03:51.167685 | orchestrator | Outputs:
2026-03-16 00:03:51.167693 | orchestrator |
2026-03-16 00:03:51.167701 | orchestrator | manager_address =
2026-03-16 00:03:51.167709 | orchestrator | private_key =
2026-03-16 00:03:51.582060 | orchestrator | ok: Runtime: 0:01:32.910589
2026-03-16 00:03:51.621400 |
2026-03-16 00:03:51.621846 | TASK [Fetch manager address]
2026-03-16 00:03:52.104041 | orchestrator | ok
2026-03-16 00:03:52.114888 |
2026-03-16 00:03:52.115029 | TASK [Set manager_host address]
2026-03-16 00:03:52.205222 | orchestrator | ok
2026-03-16 00:03:52.214244 |
2026-03-16 00:03:52.214374 | LOOP [Update ansible collections]
2026-03-16 00:03:53.224555 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-16 00:03:53.224954 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-16 00:03:53.225319 | orchestrator | Starting galaxy collection install process
2026-03-16 00:03:53.225376 | orchestrator | Process install dependency map
2026-03-16 00:03:53.225414 | orchestrator | Starting collection install process
2026-03-16 00:03:53.225450 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons'
2026-03-16 00:03:53.225512 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons
2026-03-16 00:03:53.225570 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-03-16 00:03:53.225653 | orchestrator | ok: Item: commons Runtime: 0:00:00.660473
2026-03-16 00:03:54.123063 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-16 00:03:54.123242 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-16 00:03:54.123293 | orchestrator | Starting galaxy collection install process
2026-03-16 00:03:54.123330 | orchestrator | Process install dependency map
2026-03-16 00:03:54.123376 | orchestrator | Starting collection install process
2026-03-16 00:03:54.123419 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services'
2026-03-16 00:03:54.123481 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services
2026-03-16 00:03:54.123524 | orchestrator | osism.services:999.0.0 was installed successfully
2026-03-16 00:03:54.123585 | orchestrator | ok: Item: services Runtime: 0:00:00.639040
2026-03-16 00:03:54.144848 |
2026-03-16 00:03:54.145203 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-03-16 00:04:04.798216 | orchestrator | ok
2026-03-16 00:04:04.807839 |
2026-03-16 00:04:04.807957 | TASK [Wait a little longer for the manager so that everything is ready]
2026-03-16 00:05:04.859629 | orchestrator | ok
2026-03-16 00:05:04.871447 |
2026-03-16 00:05:04.871634 | TASK [Fetch manager ssh hostkey]
2026-03-16 00:05:06.470533 | orchestrator | Output suppressed because no_log was given
2026-03-16 00:05:06.487215 |
2026-03-16 00:05:06.487400 | TASK [Get ssh keypair from terraform environment]
2026-03-16 00:05:07.021777 | orchestrator | ok: Runtime: 0:00:00.006129
2026-03-16 00:05:07.037176 |
2026-03-16 00:05:07.037329 | TASK [Point out that the following task takes some time and does not give any output]
2026-03-16 00:05:07.087686 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-03-16 00:05:07.099212 |
2026-03-16 00:05:07.099350 | TASK [Run manager part 0]
2026-03-16 00:05:08.282579 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-16 00:05:08.348561 | orchestrator |
2026-03-16 00:05:08.348616 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-03-16 00:05:08.348624 | orchestrator |
2026-03-16 00:05:08.348639 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-03-16 00:05:10.165812 | orchestrator | ok: [testbed-manager]
2026-03-16 00:05:10.165872 | orchestrator |
2026-03-16 00:05:10.165899 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-03-16 00:05:10.165912 | orchestrator |
2026-03-16 00:05:10.165924 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-16 00:05:12.039619 | orchestrator | ok: [testbed-manager]
2026-03-16 00:05:12.039679 | orchestrator |
2026-03-16 00:05:12.039693 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-03-16 00:05:12.716003 | orchestrator | ok: [testbed-manager]
2026-03-16 00:05:12.716078 | orchestrator |
2026-03-16 00:05:12.716091 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-03-16 00:05:12.773697 | orchestrator | skipping: [testbed-manager]
2026-03-16 00:05:12.773763 | orchestrator |
2026-03-16 00:05:12.773774 | orchestrator | TASK [Update package cache] ****************************************************
2026-03-16 00:05:12.804492 | orchestrator | skipping: [testbed-manager]
2026-03-16 00:05:12.804550 | orchestrator |
2026-03-16 00:05:12.804562 | orchestrator | TASK [Install required packages] ***********************************************
2026-03-16 00:05:12.837677 | orchestrator | skipping: [testbed-manager]
2026-03-16 00:05:12.837761 | orchestrator |
2026-03-16 00:05:12.837774 | orchestrator | TASK [Remove some python packages] *********************************************
2026-03-16 00:05:12.866876 | orchestrator | skipping: [testbed-manager]
2026-03-16 00:05:12.866945 | orchestrator |
2026-03-16 00:05:12.866954 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-03-16 00:05:12.897316 | orchestrator | skipping: [testbed-manager]
2026-03-16 00:05:12.897375 | orchestrator |
2026-03-16 00:05:12.897389 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-03-16 00:05:12.926946 | orchestrator | skipping: [testbed-manager]
2026-03-16 00:05:12.927052 | orchestrator |
2026-03-16 00:05:12.927068 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-03-16 00:05:12.956993 | orchestrator | skipping: [testbed-manager]
2026-03-16 00:05:12.957040 | orchestrator |
2026-03-16 00:05:12.957051 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-03-16 00:05:13.657042 | orchestrator | changed: [testbed-manager]
2026-03-16 00:05:13.657101 | orchestrator |
2026-03-16 00:05:13.657112 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-03-16 00:07:57.496028 | orchestrator | changed: [testbed-manager]
2026-03-16 00:07:57.496093 | orchestrator |
2026-03-16 00:07:57.496105 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-03-16 00:09:31.487157 | orchestrator | changed: [testbed-manager]
2026-03-16 00:09:31.487252 | orchestrator |
2026-03-16 00:09:31.487267 | orchestrator | TASK [Install required packages] ***********************************************
2026-03-16 00:09:51.771060 | orchestrator | changed: [testbed-manager]
2026-03-16 00:09:51.771156 | orchestrator |
2026-03-16 00:09:51.771172 | orchestrator | TASK [Remove some python packages] *********************************************
2026-03-16 00:09:59.686768 | orchestrator | changed: [testbed-manager]
2026-03-16 00:09:59.686817 | orchestrator |
2026-03-16 00:09:59.686825 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-03-16 00:09:59.734742 | orchestrator | ok: [testbed-manager]
2026-03-16 00:09:59.734828 | orchestrator |
2026-03-16 00:09:59.734835 | orchestrator | TASK [Get current user] ********************************************************
2026-03-16 00:10:00.563439 | orchestrator | ok: [testbed-manager]
2026-03-16 00:10:00.563494 | orchestrator |
2026-03-16 00:10:00.563508 | orchestrator | TASK [Create venv directory] ***************************************************
2026-03-16 00:10:01.288665 | orchestrator | changed: [testbed-manager]
2026-03-16 00:10:01.288725 | orchestrator |
2026-03-16 00:10:01.288737 | orchestrator | TASK [Install netaddr in venv] *************************************************
2026-03-16 00:10:08.571057 | orchestrator | changed: [testbed-manager]
2026-03-16 00:10:08.571113 | orchestrator |
2026-03-16 00:10:08.571133 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2026-03-16 00:10:14.243267 | orchestrator | changed: [testbed-manager]
2026-03-16 00:10:14.243317 | orchestrator |
2026-03-16 00:10:14.243327 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2026-03-16 00:10:17.028630 | orchestrator | changed: [testbed-manager]
2026-03-16 00:10:17.028673 | orchestrator |
2026-03-16 00:10:17.028681 | orchestrator | TASK
[Install docker >= 7.1.0] ************************************************* 2026-03-16 00:10:18.825187 | orchestrator | changed: [testbed-manager] 2026-03-16 00:10:18.825288 | orchestrator | 2026-03-16 00:10:18.825304 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-03-16 00:10:20.103479 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-16 00:10:20.103572 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-16 00:10:20.103587 | orchestrator | 2026-03-16 00:10:20.103600 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-03-16 00:10:20.190299 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-16 00:10:20.190348 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-16 00:10:20.190355 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-16 00:10:20.190360 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-16 00:10:23.495185 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-16 00:10:23.495426 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-16 00:10:23.495443 | orchestrator | 2026-03-16 00:10:23.495456 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-03-16 00:10:24.072073 | orchestrator | changed: [testbed-manager] 2026-03-16 00:10:24.072196 | orchestrator | 2026-03-16 00:10:24.072242 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-03-16 00:12:45.024996 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-03-16 00:12:45.025041 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-03-16 00:12:45.025049 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-03-16 00:12:45.025055 | orchestrator | 2026-03-16 00:12:45.025061 | orchestrator | TASK [Install local collections] *********************************************** 2026-03-16 00:12:47.507798 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-03-16 00:12:47.507906 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-03-16 00:12:47.507925 | orchestrator | 2026-03-16 00:12:47.507937 | orchestrator | PLAY [Create operator user] **************************************************** 2026-03-16 00:12:47.507949 | orchestrator | 2026-03-16 00:12:47.507960 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-16 00:12:49.021282 | orchestrator | ok: [testbed-manager] 2026-03-16 00:12:49.021363 | orchestrator | 2026-03-16 00:12:49.021380 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-16 00:12:49.071382 | orchestrator | ok: [testbed-manager] 2026-03-16 00:12:49.071450 | 
orchestrator | 2026-03-16 00:12:49.071461 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-16 00:12:49.141491 | orchestrator | ok: [testbed-manager] 2026-03-16 00:12:49.141561 | orchestrator | 2026-03-16 00:12:49.141566 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-16 00:12:49.969605 | orchestrator | changed: [testbed-manager] 2026-03-16 00:12:49.970484 | orchestrator | 2026-03-16 00:12:49.970559 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-16 00:12:50.770980 | orchestrator | changed: [testbed-manager] 2026-03-16 00:12:50.771076 | orchestrator | 2026-03-16 00:12:50.771092 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-16 00:12:52.238868 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-03-16 00:12:52.238954 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-03-16 00:12:52.238968 | orchestrator | 2026-03-16 00:12:52.238995 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-16 00:12:53.685293 | orchestrator | changed: [testbed-manager] 2026-03-16 00:12:53.685380 | orchestrator | 2026-03-16 00:12:53.685396 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-16 00:12:55.478376 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-03-16 00:12:55.478463 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-03-16 00:12:55.478475 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-03-16 00:12:55.478485 | orchestrator | 2026-03-16 00:12:55.478528 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-16 00:12:55.536332 | orchestrator | skipping: 
[testbed-manager] 2026-03-16 00:12:55.536428 | orchestrator | 2026-03-16 00:12:55.536447 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-16 00:12:55.608422 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:12:55.608571 | orchestrator | 2026-03-16 00:12:55.608592 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-16 00:12:56.177806 | orchestrator | changed: [testbed-manager] 2026-03-16 00:12:56.177902 | orchestrator | 2026-03-16 00:12:56.177928 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-16 00:12:56.247188 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:12:56.247228 | orchestrator | 2026-03-16 00:12:56.247341 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-16 00:12:57.018777 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-16 00:12:57.018992 | orchestrator | changed: [testbed-manager] 2026-03-16 00:12:57.019027 | orchestrator | 2026-03-16 00:12:57.019089 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-16 00:12:57.055746 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:12:57.055917 | orchestrator | 2026-03-16 00:12:57.055977 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-16 00:12:57.090223 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:12:57.090256 | orchestrator | 2026-03-16 00:12:57.090261 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-16 00:12:57.115108 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:12:57.115141 | orchestrator | 2026-03-16 00:12:57.115148 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-16 00:12:57.181242 | 
orchestrator | skipping: [testbed-manager] 2026-03-16 00:12:57.181277 | orchestrator | 2026-03-16 00:12:57.181282 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-16 00:12:57.825194 | orchestrator | ok: [testbed-manager] 2026-03-16 00:12:57.825229 | orchestrator | 2026-03-16 00:12:57.825235 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-16 00:12:57.825239 | orchestrator | 2026-03-16 00:12:57.825244 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-16 00:12:59.398431 | orchestrator | ok: [testbed-manager] 2026-03-16 00:12:59.398507 | orchestrator | 2026-03-16 00:12:59.398515 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-03-16 00:13:00.363999 | orchestrator | changed: [testbed-manager] 2026-03-16 00:13:00.364067 | orchestrator | 2026-03-16 00:13:00.364078 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-16 00:13:00.364084 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-03-16 00:13:00.364089 | orchestrator | 2026-03-16 00:13:00.915801 | orchestrator | ok: Runtime: 0:07:53.056660 2026-03-16 00:13:00.931381 | 2026-03-16 00:13:00.931513 | TASK [Point out that logging in on the manager is now possible] 2026-03-16 00:13:00.978104 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-03-16 00:13:00.988043 | 2026-03-16 00:13:00.988190 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-16 00:13:01.038800 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
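The PLAY RECAP line above ("testbed-manager : ok=33 changed=23 ... failed=0") is what downstream tooling typically inspects to decide whether a run succeeded. A minimal sketch of such a check, with hypothetical helper names:

```python
import re

# key=value counters as they appear in an Ansible PLAY RECAP line
_STAT_RE = re.compile(r"([a-z]+)=(\d+)")

def parse_recap(line: str) -> dict:
    """Split 'host : ok=.. changed=..' into a host name plus counter dict."""
    host, _, stats = line.partition(":")
    counters = {key: int(value) for key, value in _STAT_RE.findall(stats)}
    return {"host": host.strip(), **counters}

def run_succeeded(recap: dict) -> bool:
    """A run is healthy when nothing failed and no host was unreachable."""
    return recap.get("failed", 1) == 0 and recap.get("unreachable", 1) == 0
```

Checking both `failed` and `unreachable` matters: Ansible exits a play with `failed=0` even when a host dropped out, so a recap-based gate that only looks at failures can miss an unreachable node.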
2026-03-16 00:13:01.047367 | 2026-03-16 00:13:01.047503 | TASK [Run manager part 1 + 2] 2026-03-16 00:13:01.944308 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-16 00:13:02.003459 | orchestrator | 2026-03-16 00:13:02.003545 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-03-16 00:13:02.003554 | orchestrator | 2026-03-16 00:13:02.003568 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-16 00:13:05.012373 | orchestrator | ok: [testbed-manager] 2026-03-16 00:13:05.012444 | orchestrator | 2026-03-16 00:13:05.012532 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-16 00:13:05.059189 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:13:05.059238 | orchestrator | 2026-03-16 00:13:05.059247 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-16 00:13:05.102671 | orchestrator | ok: [testbed-manager] 2026-03-16 00:13:05.102800 | orchestrator | 2026-03-16 00:13:05.102811 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-16 00:13:05.147881 | orchestrator | ok: [testbed-manager] 2026-03-16 00:13:05.147924 | orchestrator | 2026-03-16 00:13:05.147931 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-16 00:13:05.212573 | orchestrator | ok: [testbed-manager] 2026-03-16 00:13:05.212614 | orchestrator | 2026-03-16 00:13:05.212621 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-16 00:13:05.270342 | orchestrator | ok: [testbed-manager] 2026-03-16 00:13:05.270390 | orchestrator | 2026-03-16 00:13:05.270404 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-16 00:13:05.313301 | 
orchestrator | included: /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-03-16 00:13:05.313364 | orchestrator | 2026-03-16 00:13:05.313379 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-16 00:13:06.049467 | orchestrator | ok: [testbed-manager] 2026-03-16 00:13:06.049521 | orchestrator | 2026-03-16 00:13:06.049530 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-16 00:13:06.088195 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:13:06.088319 | orchestrator | 2026-03-16 00:13:06.088325 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-16 00:13:07.546352 | orchestrator | changed: [testbed-manager] 2026-03-16 00:13:07.546409 | orchestrator | 2026-03-16 00:13:07.546418 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-16 00:13:08.173641 | orchestrator | ok: [testbed-manager] 2026-03-16 00:13:08.173869 | orchestrator | 2026-03-16 00:13:08.173884 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-16 00:13:09.414085 | orchestrator | changed: [testbed-manager] 2026-03-16 00:13:09.414123 | orchestrator | 2026-03-16 00:13:09.414132 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-16 00:13:26.081861 | orchestrator | changed: [testbed-manager] 2026-03-16 00:13:26.081936 | orchestrator | 2026-03-16 00:13:26.081952 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-16 00:13:26.748729 | orchestrator | ok: [testbed-manager] 2026-03-16 00:13:26.749409 | orchestrator | 2026-03-16 00:13:26.749446 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-03-16 00:13:26.800993 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:13:26.801054 | orchestrator | 2026-03-16 00:13:26.801069 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-03-16 00:13:27.812413 | orchestrator | changed: [testbed-manager] 2026-03-16 00:13:27.812470 | orchestrator | 2026-03-16 00:13:27.812503 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-03-16 00:13:28.825784 | orchestrator | changed: [testbed-manager] 2026-03-16 00:13:28.825851 | orchestrator | 2026-03-16 00:13:28.825868 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-03-16 00:13:29.425454 | orchestrator | changed: [testbed-manager] 2026-03-16 00:13:29.425578 | orchestrator | 2026-03-16 00:13:29.425600 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-03-16 00:13:29.472891 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-16 00:13:29.472960 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-16 00:13:29.472967 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-16 00:13:29.472973 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-16 00:13:31.498214 | orchestrator | changed: [testbed-manager] 2026-03-16 00:13:31.498258 | orchestrator | 2026-03-16 00:13:31.498265 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-03-16 00:13:40.904518 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-03-16 00:13:40.904558 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-03-16 00:13:40.904566 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-03-16 00:13:40.904573 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-03-16 00:13:40.904580 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-03-16 00:13:40.904586 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-03-16 00:13:40.904591 | orchestrator | 2026-03-16 00:13:40.904596 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-03-16 00:13:41.982930 | orchestrator | changed: [testbed-manager] 2026-03-16 00:13:41.983099 | orchestrator | 2026-03-16 00:13:41.983117 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-03-16 00:13:42.022074 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:13:42.022107 | orchestrator | 2026-03-16 00:13:42.022112 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-03-16 00:13:45.152735 | orchestrator | changed: [testbed-manager] 2026-03-16 00:13:45.152825 | orchestrator | 2026-03-16 00:13:45.152841 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-03-16 00:13:45.186805 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:13:45.186862 | orchestrator | 2026-03-16 00:13:45.186869 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-03-16 00:15:33.967001 | orchestrator | changed: [testbed-manager] 2026-03-16 
00:15:33.967087 | orchestrator | 2026-03-16 00:15:33.967102 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-16 00:15:35.139338 | orchestrator | ok: [testbed-manager] 2026-03-16 00:15:35.139432 | orchestrator | 2026-03-16 00:15:35.139450 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-16 00:15:35.139466 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-03-16 00:15:35.139480 | orchestrator | 2026-03-16 00:15:35.717791 | orchestrator | ok: Runtime: 0:02:33.881106 2026-03-16 00:15:35.733984 | 2026-03-16 00:15:35.734134 | TASK [Reboot manager] 2026-03-16 00:15:37.283360 | orchestrator | ok: Runtime: 0:00:00.954621 2026-03-16 00:15:37.299939 | 2026-03-16 00:15:37.300107 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-16 00:15:51.210549 | orchestrator | ok 2026-03-16 00:15:51.221654 | 2026-03-16 00:15:51.221793 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-16 00:16:51.270950 | orchestrator | ok 2026-03-16 00:16:51.283364 | 2026-03-16 00:16:51.283532 | TASK [Deploy manager + bootstrap nodes] 2026-03-16 00:16:53.728430 | orchestrator | 2026-03-16 00:16:53.728636 | orchestrator | # DEPLOY MANAGER 2026-03-16 00:16:53.728661 | orchestrator | 2026-03-16 00:16:53.728676 | orchestrator | + set -e 2026-03-16 00:16:53.728689 | orchestrator | + echo 2026-03-16 00:16:53.728703 | orchestrator | + echo '# DEPLOY MANAGER' 2026-03-16 00:16:53.728721 | orchestrator | + echo 2026-03-16 00:16:53.728782 | orchestrator | + cat /opt/manager-vars.sh 2026-03-16 00:16:53.731642 | orchestrator | export NUMBER_OF_NODES=6 2026-03-16 00:16:53.731712 | orchestrator | 2026-03-16 00:16:53.731722 | orchestrator | export CEPH_VERSION=reef 2026-03-16 00:16:53.731731 | orchestrator | export CONFIGURATION_VERSION=main 2026-03-16 00:16:53.731739 | orchestrator 
| export MANAGER_VERSION=9.5.0 2026-03-16 00:16:53.731758 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-03-16 00:16:53.731765 | orchestrator | 2026-03-16 00:16:53.731776 | orchestrator | export ARA=false 2026-03-16 00:16:53.731783 | orchestrator | export DEPLOY_MODE=manager 2026-03-16 00:16:53.731793 | orchestrator | export TEMPEST=true 2026-03-16 00:16:53.731800 | orchestrator | export IS_ZUUL=true 2026-03-16 00:16:53.731806 | orchestrator | 2026-03-16 00:16:53.731817 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.83 2026-03-16 00:16:53.731824 | orchestrator | export EXTERNAL_API=false 2026-03-16 00:16:53.731840 | orchestrator | 2026-03-16 00:16:53.731846 | orchestrator | export IMAGE_USER=ubuntu 2026-03-16 00:16:53.731855 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-03-16 00:16:53.731862 | orchestrator | 2026-03-16 00:16:53.731868 | orchestrator | export CEPH_STACK=ceph-ansible 2026-03-16 00:16:53.731890 | orchestrator | 2026-03-16 00:16:53.731897 | orchestrator | + echo 2026-03-16 00:16:53.731905 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-16 00:16:53.732365 | orchestrator | ++ export INTERACTIVE=false 2026-03-16 00:16:53.732377 | orchestrator | ++ INTERACTIVE=false 2026-03-16 00:16:53.732384 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-16 00:16:53.732391 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-16 00:16:53.732593 | orchestrator | + source /opt/manager-vars.sh 2026-03-16 00:16:53.732611 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-16 00:16:53.732618 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-16 00:16:53.732627 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-16 00:16:53.732634 | orchestrator | ++ CEPH_VERSION=reef 2026-03-16 00:16:53.732640 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-16 00:16:53.732646 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-16 00:16:53.732653 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-16 00:16:53.732659 | 
orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-16 00:16:53.732665 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-16 00:16:53.732694 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-16 00:16:53.732704 | orchestrator | ++ export ARA=false 2026-03-16 00:16:53.732751 | orchestrator | ++ ARA=false 2026-03-16 00:16:53.732758 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-16 00:16:53.732765 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-16 00:16:53.732771 | orchestrator | ++ export TEMPEST=true 2026-03-16 00:16:53.732777 | orchestrator | ++ TEMPEST=true 2026-03-16 00:16:53.732783 | orchestrator | ++ export IS_ZUUL=true 2026-03-16 00:16:53.732790 | orchestrator | ++ IS_ZUUL=true 2026-03-16 00:16:53.732798 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.83 2026-03-16 00:16:53.732805 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.83 2026-03-16 00:16:53.732811 | orchestrator | ++ export EXTERNAL_API=false 2026-03-16 00:16:53.732817 | orchestrator | ++ EXTERNAL_API=false 2026-03-16 00:16:53.732824 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-16 00:16:53.732829 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-16 00:16:53.732836 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-16 00:16:53.732842 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-16 00:16:53.732848 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-16 00:16:53.732854 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-16 00:16:53.732860 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-03-16 00:16:53.786098 | orchestrator | + docker version 2026-03-16 00:16:53.915861 | orchestrator | Client: Docker Engine - Community 2026-03-16 00:16:53.915973 | orchestrator | Version: 27.5.1 2026-03-16 00:16:53.915997 | orchestrator | API version: 1.47 2026-03-16 00:16:53.916018 | orchestrator | Go version: go1.22.11 2026-03-16 00:16:53.916036 | orchestrator | Git commit: 9f9e405 2026-03-16 00:16:53.916056 | 
orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-16 00:16:53.916076 | orchestrator | OS/Arch: linux/amd64 2026-03-16 00:16:53.916095 | orchestrator | Context: default 2026-03-16 00:16:53.916111 | orchestrator | 2026-03-16 00:16:53.916123 | orchestrator | Server: Docker Engine - Community 2026-03-16 00:16:53.916134 | orchestrator | Engine: 2026-03-16 00:16:53.916145 | orchestrator | Version: 27.5.1 2026-03-16 00:16:53.916157 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-03-16 00:16:53.916199 | orchestrator | Go version: go1.22.11 2026-03-16 00:16:53.916211 | orchestrator | Git commit: 4c9b3b0 2026-03-16 00:16:53.916222 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-16 00:16:53.916233 | orchestrator | OS/Arch: linux/amd64 2026-03-16 00:16:53.916244 | orchestrator | Experimental: false 2026-03-16 00:16:53.916254 | orchestrator | containerd: 2026-03-16 00:16:53.916265 | orchestrator | Version: v2.2.2 2026-03-16 00:16:53.916277 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9 2026-03-16 00:16:53.916288 | orchestrator | runc: 2026-03-16 00:16:53.916299 | orchestrator | Version: 1.3.4 2026-03-16 00:16:53.916310 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-03-16 00:16:53.916321 | orchestrator | docker-init: 2026-03-16 00:16:53.916332 | orchestrator | Version: 0.19.0 2026-03-16 00:16:53.916344 | orchestrator | GitCommit: de40ad0 2026-03-16 00:16:53.918435 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-03-16 00:16:53.925752 | orchestrator | + set -e 2026-03-16 00:16:53.925784 | orchestrator | + source /opt/manager-vars.sh 2026-03-16 00:16:53.925789 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-16 00:16:53.925795 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-16 00:16:53.925799 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-16 00:16:53.925803 | orchestrator | ++ CEPH_VERSION=reef 2026-03-16 00:16:53.925807 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-16 
00:16:53.925812 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-16 00:16:53.925816 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-16 00:16:53.925820 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-16 00:16:53.925824 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-16 00:16:53.925828 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-16 00:16:53.925832 | orchestrator | ++ export ARA=false 2026-03-16 00:16:53.925836 | orchestrator | ++ ARA=false 2026-03-16 00:16:53.925840 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-16 00:16:53.925844 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-16 00:16:53.925848 | orchestrator | ++ export TEMPEST=true 2026-03-16 00:16:53.925851 | orchestrator | ++ TEMPEST=true 2026-03-16 00:16:53.925855 | orchestrator | ++ export IS_ZUUL=true 2026-03-16 00:16:53.925859 | orchestrator | ++ IS_ZUUL=true 2026-03-16 00:16:53.925863 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.83 2026-03-16 00:16:53.925867 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.83 2026-03-16 00:16:53.925870 | orchestrator | ++ export EXTERNAL_API=false 2026-03-16 00:16:53.925874 | orchestrator | ++ EXTERNAL_API=false 2026-03-16 00:16:53.925878 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-16 00:16:53.925882 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-16 00:16:53.925885 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-16 00:16:53.925889 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-16 00:16:53.925893 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-16 00:16:53.925897 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-16 00:16:53.925901 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-16 00:16:53.925909 | orchestrator | ++ export INTERACTIVE=false 2026-03-16 00:16:53.925913 | orchestrator | ++ INTERACTIVE=false 2026-03-16 00:16:53.925917 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-16 00:16:53.925923 | orchestrator | ++ OSISM_APPLY_RETRY=1 
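The `/opt/manager-vars.sh` dump sourced above is a flat list of `export KEY=VALUE` lines. Reading such a file back into a mapping can be sketched like this; it is a deliberately simplified parser (no quoting or variable expansion) with hypothetical names:

```python
def parse_env_exports(text: str) -> dict:
    """Collect KEY/VALUE pairs from plain 'export KEY=VALUE' shell lines.

    Blank lines and anything that is not a simple export are skipped;
    quoted values and $-expansion are deliberately not handled.
    """
    env = {}
    for raw in text.splitlines():
        line = raw.strip()
        if line.startswith("export ") and "=" in line:
            key, _, value = line[len("export "):].partition("=")
            env[key.strip()] = value.strip()
    return env
```

This mirrors what the `source /opt/manager-vars.sh` trace shows the shell doing: each `export KEY=VALUE` becomes one entry, and empty separator lines contribute nothing.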
2026-03-16 00:16:53.925927 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-03-16 00:16:53.925930 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0 2026-03-16 00:16:53.929044 | orchestrator | + set -e 2026-03-16 00:16:53.929115 | orchestrator | + VERSION=9.5.0 2026-03-16 00:16:53.929122 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml 2026-03-16 00:16:53.935996 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-03-16 00:16:53.936005 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-03-16 00:16:53.940221 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-03-16 00:16:53.944718 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-03-16 00:16:53.953108 | orchestrator | /opt/configuration ~ 2026-03-16 00:16:53.953122 | orchestrator | + set -e 2026-03-16 00:16:53.953127 | orchestrator | + pushd /opt/configuration 2026-03-16 00:16:53.953132 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-16 00:16:53.956182 | orchestrator | + source /opt/venv/bin/activate 2026-03-16 00:16:53.957132 | orchestrator | ++ deactivate nondestructive 2026-03-16 00:16:53.957140 | orchestrator | ++ '[' -n '' ']' 2026-03-16 00:16:53.957151 | orchestrator | ++ '[' -n '' ']' 2026-03-16 00:16:53.957179 | orchestrator | ++ hash -r 2026-03-16 00:16:53.957185 | orchestrator | ++ '[' -n '' ']' 2026-03-16 00:16:53.957189 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-16 00:16:53.957225 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-16 00:16:53.957231 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-03-16 00:16:53.957474 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-16 00:16:53.957481 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-16 00:16:53.957485 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-16 00:16:53.957489 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-16 00:16:53.957585 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-16 00:16:53.957592 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-16 00:16:53.957655 | orchestrator | ++ export PATH 2026-03-16 00:16:53.957754 | orchestrator | ++ '[' -n '' ']' 2026-03-16 00:16:53.957829 | orchestrator | ++ '[' -z '' ']' 2026-03-16 00:16:53.957835 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-16 00:16:53.957889 | orchestrator | ++ PS1='(venv) ' 2026-03-16 00:16:53.957894 | orchestrator | ++ export PS1 2026-03-16 00:16:53.957898 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-16 00:16:53.957902 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-16 00:16:53.957949 | orchestrator | ++ hash -r 2026-03-16 00:16:53.958162 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-03-16 00:16:54.924212 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-03-16 00:16:54.925142 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-03-16 00:16:54.926565 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-03-16 00:16:54.928187 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-03-16 00:16:54.929127 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-03-16 00:16:54.939274 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-03-16 00:16:54.940743 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-03-16 00:16:54.941871 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-03-16 00:16:54.943340 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-03-16 00:16:54.976058 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.6) 2026-03-16 00:16:54.977597 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-03-16 00:16:54.979330 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-03-16 00:16:54.980626 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25) 2026-03-16 00:16:54.984603 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-03-16 00:16:55.198574 | orchestrator | ++ which gilt 2026-03-16 00:16:55.202413 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-03-16 00:16:55.202496 | orchestrator | + /opt/venv/bin/gilt overlay 2026-03-16 00:16:55.421042 | orchestrator | osism.cfg-generics: 2026-03-16 00:16:55.564781 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-03-16 00:16:55.564890 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-03-16 00:16:55.565174 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-03-16 00:16:55.565334 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-03-16 00:16:56.424187 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-03-16 00:16:56.433872 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-03-16 00:16:56.889098 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-03-16 00:16:56.941988 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-16 00:16:56.942129 | orchestrator | + deactivate 2026-03-16 00:16:56.942146 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-16 00:16:56.942158 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-16 00:16:56.942167 | orchestrator | + export PATH 2026-03-16 00:16:56.942178 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-16 00:16:56.942188 | orchestrator | + '[' -n '' ']' 2026-03-16 00:16:56.942201 | orchestrator | + hash -r 2026-03-16 00:16:56.942211 | orchestrator | + '[' -n '' ']' 2026-03-16 00:16:56.942220 | orchestrator | + unset VIRTUAL_ENV 2026-03-16 00:16:56.942230 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-16 00:16:56.942251 | orchestrator | ~ 2026-03-16 00:16:56.942262 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-03-16 00:16:56.942272 | orchestrator | + unset -f deactivate 2026-03-16 00:16:56.942282 | orchestrator | + popd 2026-03-16 00:16:56.943886 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-03-16 00:16:56.943925 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-03-16 00:16:56.944391 | orchestrator | ++ semver 9.5.0 7.0.0 2026-03-16 00:16:56.996669 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-16 00:16:56.996768 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-03-16 00:16:56.997625 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-03-16 00:16:57.056217 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-16 00:16:57.056880 | orchestrator | ++ semver 2024.2 2025.1 2026-03-16 00:16:57.116232 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-16 00:16:57.116334 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-03-16 00:16:57.204891 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-16 00:16:57.205078 | orchestrator | + source /opt/venv/bin/activate 2026-03-16 00:16:57.205108 | orchestrator | ++ deactivate nondestructive 2026-03-16 00:16:57.205130 | orchestrator | ++ '[' -n '' ']' 2026-03-16 00:16:57.205184 | orchestrator | ++ '[' -n '' ']' 2026-03-16 00:16:57.205219 | orchestrator | ++ hash -r 2026-03-16 00:16:57.205269 | orchestrator | ++ '[' -n '' ']' 2026-03-16 00:16:57.205305 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-16 00:16:57.205325 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-16 00:16:57.205345 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-03-16 00:16:57.205393 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-16 00:16:57.205414 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-16 00:16:57.205429 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-16 00:16:57.205464 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-16 00:16:57.205485 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-16 00:16:57.205561 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-16 00:16:57.205586 | orchestrator | ++ export PATH 2026-03-16 00:16:57.205604 | orchestrator | ++ '[' -n '' ']' 2026-03-16 00:16:57.205630 | orchestrator | ++ '[' -z '' ']' 2026-03-16 00:16:57.205642 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-16 00:16:57.205653 | orchestrator | ++ PS1='(venv) ' 2026-03-16 00:16:57.205664 | orchestrator | ++ export PS1 2026-03-16 00:16:57.205674 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-16 00:16:57.205685 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-16 00:16:57.205696 | orchestrator | ++ hash -r 2026-03-16 00:16:57.205707 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-03-16 00:16:58.317695 | orchestrator | 2026-03-16 00:16:58.317783 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-03-16 00:16:58.317797 | orchestrator | 2026-03-16 00:16:58.317809 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-16 00:16:58.891688 | orchestrator | ok: [testbed-manager] 2026-03-16 00:16:58.891794 | orchestrator | 2026-03-16 00:16:58.891811 | orchestrator | TASK [Copy fact files] ********************************************************* 
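Earlier in the trace, `semver 9.5.0 7.0.0` printing `1` gates `enable_osism_kubernetes: true`, while `semver 9.5.0 10.0.0-0` and `semver 2024.2 2025.1` print `-1` and skip their branches. The testbed's `semver` helper itself is not shown in the log; a rough stand-in built on `sort -V` reproduces the comparisons seen here (it does not implement full SemVer pre-release ordering):

```shell
# Rough stand-in for the testbed's `semver` helper: prints -1, 0 or 1
# depending on how $1 compares to $2 under GNU version sort.
semver() {
  if [ "$1" = "$2" ]; then
    echo 0
  elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then
    echo -1
  else
    echo 1
  fi
}

# Same gating as in the log: enable the feature from 7.0.0 onwards.
if [ "$(semver 9.5.0 7.0.0)" -ge 0 ]; then
  echo 'enable_osism_kubernetes: true'
fi
```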
2026-03-16 00:16:59.861296 | orchestrator | changed: [testbed-manager] 2026-03-16 00:16:59.861392 | orchestrator | 2026-03-16 00:16:59.861436 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-03-16 00:16:59.861492 | orchestrator | 2026-03-16 00:16:59.861505 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-16 00:17:02.089280 | orchestrator | ok: [testbed-manager] 2026-03-16 00:17:02.089377 | orchestrator | 2026-03-16 00:17:02.089393 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-03-16 00:17:02.145042 | orchestrator | ok: [testbed-manager] 2026-03-16 00:17:02.145126 | orchestrator | 2026-03-16 00:17:02.145141 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-03-16 00:17:02.604658 | orchestrator | changed: [testbed-manager] 2026-03-16 00:17:02.604753 | orchestrator | 2026-03-16 00:17:02.604772 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-03-16 00:17:02.648090 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:17:02.648189 | orchestrator | 2026-03-16 00:17:02.648212 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-03-16 00:17:02.987206 | orchestrator | changed: [testbed-manager] 2026-03-16 00:17:02.987298 | orchestrator | 2026-03-16 00:17:02.987315 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-03-16 00:17:03.309141 | orchestrator | ok: [testbed-manager] 2026-03-16 00:17:03.309260 | orchestrator | 2026-03-16 00:17:03.309295 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-03-16 00:17:03.419697 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:17:03.419788 | orchestrator | 2026-03-16 00:17:03.419805 | orchestrator | PLAY 
[Apply role traefik] ****************************************************** 2026-03-16 00:17:03.419818 | orchestrator | 2026-03-16 00:17:03.419830 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-16 00:17:05.172691 | orchestrator | ok: [testbed-manager] 2026-03-16 00:17:05.172790 | orchestrator | 2026-03-16 00:17:05.172809 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-03-16 00:17:05.272131 | orchestrator | included: osism.services.traefik for testbed-manager 2026-03-16 00:17:05.272223 | orchestrator | 2026-03-16 00:17:05.272240 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-03-16 00:17:05.328052 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-03-16 00:17:05.328141 | orchestrator | 2026-03-16 00:17:05.328158 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-03-16 00:17:06.426637 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-03-16 00:17:06.426756 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-03-16 00:17:06.426783 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-03-16 00:17:06.426803 | orchestrator | 2026-03-16 00:17:06.426828 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-03-16 00:17:08.210232 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-03-16 00:17:08.210331 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-03-16 00:17:08.210347 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-03-16 00:17:08.210359 | orchestrator | 2026-03-16 00:17:08.210371 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] 
******************** 2026-03-16 00:17:08.790326 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-16 00:17:08.790451 | orchestrator | changed: [testbed-manager] 2026-03-16 00:17:08.790479 | orchestrator | 2026-03-16 00:17:08.790498 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-03-16 00:17:09.368808 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-16 00:17:09.368910 | orchestrator | changed: [testbed-manager] 2026-03-16 00:17:09.368926 | orchestrator | 2026-03-16 00:17:09.368937 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-03-16 00:17:09.419750 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:17:09.419834 | orchestrator | 2026-03-16 00:17:09.419848 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-03-16 00:17:09.739971 | orchestrator | ok: [testbed-manager] 2026-03-16 00:17:09.740064 | orchestrator | 2026-03-16 00:17:09.740085 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-03-16 00:17:09.806418 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-03-16 00:17:09.806566 | orchestrator | 2026-03-16 00:17:09.806588 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-03-16 00:17:10.737872 | orchestrator | changed: [testbed-manager] 2026-03-16 00:17:10.737981 | orchestrator | 2026-03-16 00:17:10.737998 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-03-16 00:17:11.435067 | orchestrator | changed: [testbed-manager] 2026-03-16 00:17:11.435162 | orchestrator | 2026-03-16 00:17:11.435179 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-03-16 00:17:24.715995 | 
orchestrator | changed: [testbed-manager] 2026-03-16 00:17:24.716080 | orchestrator | 2026-03-16 00:17:24.716090 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-03-16 00:17:24.755993 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:17:24.756076 | orchestrator | 2026-03-16 00:17:24.756109 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-03-16 00:17:24.756122 | orchestrator | 2026-03-16 00:17:24.756132 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-16 00:17:26.409262 | orchestrator | ok: [testbed-manager] 2026-03-16 00:17:26.409356 | orchestrator | 2026-03-16 00:17:26.409373 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-03-16 00:17:26.515747 | orchestrator | included: osism.services.manager for testbed-manager 2026-03-16 00:17:26.515838 | orchestrator | 2026-03-16 00:17:26.515853 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-03-16 00:17:26.570400 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-03-16 00:17:26.570487 | orchestrator | 2026-03-16 00:17:26.570558 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-03-16 00:17:28.755833 | orchestrator | ok: [testbed-manager] 2026-03-16 00:17:28.755935 | orchestrator | 2026-03-16 00:17:28.755952 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-03-16 00:17:28.807542 | orchestrator | ok: [testbed-manager] 2026-03-16 00:17:28.807642 | orchestrator | 2026-03-16 00:17:28.807660 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-03-16 00:17:28.933242 | orchestrator | 
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-03-16 00:17:28.933333 | orchestrator | 2026-03-16 00:17:28.933349 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-03-16 00:17:31.491966 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-03-16 00:17:31.492085 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-03-16 00:17:31.492111 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-03-16 00:17:31.492131 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-03-16 00:17:31.492145 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-03-16 00:17:31.492157 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-03-16 00:17:31.492168 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-03-16 00:17:31.492179 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-03-16 00:17:31.492190 | orchestrator | 2026-03-16 00:17:31.492202 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-03-16 00:17:32.080671 | orchestrator | changed: [testbed-manager] 2026-03-16 00:17:32.080771 | orchestrator | 2026-03-16 00:17:32.080792 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-03-16 00:17:32.648677 | orchestrator | changed: [testbed-manager] 2026-03-16 00:17:32.648797 | orchestrator | 2026-03-16 00:17:32.648814 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-03-16 00:17:32.727027 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-03-16 00:17:32.727131 | orchestrator | 2026-03-16 00:17:32.727148 | orchestrator | TASK 
[osism.services.manager : Copy ARA environment files] ********************* 2026-03-16 00:17:33.918430 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-03-16 00:17:33.918560 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-03-16 00:17:33.918578 | orchestrator | 2026-03-16 00:17:33.918592 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-03-16 00:17:34.564387 | orchestrator | changed: [testbed-manager] 2026-03-16 00:17:34.564504 | orchestrator | 2026-03-16 00:17:34.564520 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-03-16 00:17:34.622272 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:17:34.622371 | orchestrator | 2026-03-16 00:17:34.622387 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-03-16 00:17:34.696928 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-03-16 00:17:34.697027 | orchestrator | 2026-03-16 00:17:34.697042 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-03-16 00:17:35.296397 | orchestrator | changed: [testbed-manager] 2026-03-16 00:17:35.296467 | orchestrator | 2026-03-16 00:17:35.296473 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-03-16 00:17:35.357894 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-03-16 00:17:35.357970 | orchestrator | 2026-03-16 00:17:35.357981 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-03-16 00:17:36.689499 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-16 00:17:36.689585 | orchestrator | changed: [testbed-manager] => 
(item=None) 2026-03-16 00:17:36.689597 | orchestrator | changed: [testbed-manager] 2026-03-16 00:17:36.689607 | orchestrator | 2026-03-16 00:17:36.689616 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-03-16 00:17:37.258333 | orchestrator | changed: [testbed-manager] 2026-03-16 00:17:37.258432 | orchestrator | 2026-03-16 00:17:37.258449 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-03-16 00:17:37.307149 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:17:37.307241 | orchestrator | 2026-03-16 00:17:37.307257 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-03-16 00:17:37.386652 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-03-16 00:17:37.386746 | orchestrator | 2026-03-16 00:17:37.386761 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-03-16 00:17:37.833299 | orchestrator | changed: [testbed-manager] 2026-03-16 00:17:37.834329 | orchestrator | 2026-03-16 00:17:37.834369 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-03-16 00:17:38.207318 | orchestrator | changed: [testbed-manager] 2026-03-16 00:17:38.207435 | orchestrator | 2026-03-16 00:17:38.207462 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-03-16 00:17:39.290720 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-03-16 00:17:39.290818 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-03-16 00:17:39.290834 | orchestrator | 2026-03-16 00:17:39.290848 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-03-16 00:17:39.892773 | orchestrator | changed: [testbed-manager] 2026-03-16 
00:17:39.892870 | orchestrator | 2026-03-16 00:17:39.892888 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-03-16 00:17:40.263622 | orchestrator | ok: [testbed-manager] 2026-03-16 00:17:40.263718 | orchestrator | 2026-03-16 00:17:40.263735 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-03-16 00:17:40.581804 | orchestrator | changed: [testbed-manager] 2026-03-16 00:17:40.581898 | orchestrator | 2026-03-16 00:17:40.581916 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-03-16 00:17:40.619710 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:17:40.619804 | orchestrator | 2026-03-16 00:17:40.619820 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-03-16 00:17:40.692108 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-03-16 00:17:40.692248 | orchestrator | 2026-03-16 00:17:40.692266 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-03-16 00:17:40.729689 | orchestrator | ok: [testbed-manager] 2026-03-16 00:17:40.729771 | orchestrator | 2026-03-16 00:17:40.729785 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-03-16 00:17:42.539049 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-03-16 00:17:42.539153 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-03-16 00:17:42.539169 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-03-16 00:17:42.539181 | orchestrator | 2026-03-16 00:17:42.539194 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-03-16 00:17:43.155286 | orchestrator | changed: [testbed-manager] 2026-03-16 
00:17:43.155389 | orchestrator | 2026-03-16 00:17:43.155406 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-03-16 00:17:43.808734 | orchestrator | changed: [testbed-manager] 2026-03-16 00:17:43.808832 | orchestrator | 2026-03-16 00:17:43.808849 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-03-16 00:17:44.419792 | orchestrator | changed: [testbed-manager] 2026-03-16 00:17:44.419889 | orchestrator | 2026-03-16 00:17:44.419907 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-03-16 00:17:44.485358 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-03-16 00:17:44.485449 | orchestrator | 2026-03-16 00:17:44.485520 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-03-16 00:17:44.526780 | orchestrator | ok: [testbed-manager] 2026-03-16 00:17:44.526867 | orchestrator | 2026-03-16 00:17:44.526882 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-03-16 00:17:45.225278 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-03-16 00:17:45.225376 | orchestrator | 2026-03-16 00:17:45.225392 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-03-16 00:17:45.321418 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-03-16 00:17:45.321527 | orchestrator | 2026-03-16 00:17:45.321539 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-03-16 00:17:45.997656 | orchestrator | changed: [testbed-manager] 2026-03-16 00:17:45.997777 | orchestrator | 2026-03-16 00:17:45.997807 | orchestrator | TASK 
[osism.services.manager : Create traefik external network] **************** 2026-03-16 00:17:46.592615 | orchestrator | ok: [testbed-manager] 2026-03-16 00:17:46.592694 | orchestrator | 2026-03-16 00:17:46.592704 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-03-16 00:17:46.650691 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:17:46.650757 | orchestrator | 2026-03-16 00:17:46.650764 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-03-16 00:17:46.702242 | orchestrator | ok: [testbed-manager] 2026-03-16 00:17:46.702351 | orchestrator | 2026-03-16 00:17:46.702378 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-03-16 00:17:47.493001 | orchestrator | changed: [testbed-manager] 2026-03-16 00:17:47.493097 | orchestrator | 2026-03-16 00:17:47.493114 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-03-16 00:18:53.945406 | orchestrator | changed: [testbed-manager] 2026-03-16 00:18:53.945532 | orchestrator | 2026-03-16 00:18:53.945557 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-03-16 00:18:54.936930 | orchestrator | ok: [testbed-manager] 2026-03-16 00:18:54.937027 | orchestrator | 2026-03-16 00:18:54.937043 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-03-16 00:18:54.992689 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:18:54.992766 | orchestrator | 2026-03-16 00:18:54.992776 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-03-16 00:18:57.345376 | orchestrator | changed: [testbed-manager] 2026-03-16 00:18:57.345603 | orchestrator | 2026-03-16 00:18:57.345625 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 
2026-03-16 00:18:57.399105 | orchestrator | ok: [testbed-manager] 2026-03-16 00:18:57.399188 | orchestrator | 2026-03-16 00:18:57.399204 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-16 00:18:57.399217 | orchestrator | 2026-03-16 00:18:57.399228 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-03-16 00:18:57.595973 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:18:57.596033 | orchestrator | 2026-03-16 00:18:57.596042 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-03-16 00:19:57.647825 | orchestrator | Pausing for 60 seconds 2026-03-16 00:19:57.647930 | orchestrator | changed: [testbed-manager] 2026-03-16 00:19:57.647946 | orchestrator | 2026-03-16 00:19:57.647959 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-03-16 00:20:00.644745 | orchestrator | changed: [testbed-manager] 2026-03-16 00:20:00.644872 | orchestrator | 2026-03-16 00:20:00.644892 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-03-16 00:21:02.668799 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-03-16 00:21:02.668915 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-03-16 00:21:02.668952 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 
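The "Wait for an healthy manager service" handler above polls with up to 50 retries until the container's healthcheck passes. A hedged sketch of that pattern — the role's actual implementation is not shown in the log, and `wait_healthy`, its arguments, and the retry delay are assumptions:

```shell
# Sketch: poll a container's Docker healthcheck until it reports "healthy".
# Mirrors the retry budget seen in the log (50 retries); the delay is a guess.
wait_healthy() {
  container="$1"
  retries="${2:-50}"
  delay="${3:-5}"
  while [ "$retries" -gt 0 ]; do
    status=$(docker inspect --format '{{.State.Health.Status}}' "$container" 2>/dev/null)
    if [ "$status" = "healthy" ]; then
      return 0
    fi
    retries=$((retries - 1))
    sleep "$delay"
  done
  return 1
}

# Example (container name hypothetical): wait_healthy manager-osism-1
```

`docker inspect --format '{{.State.Health.Status}}'` only yields a value for containers whose image or compose file defines a HEALTHCHECK, which the manager service evidently does.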
2026-03-16 00:21:02.668966 | orchestrator | changed: [testbed-manager]
2026-03-16 00:21:02.668980 | orchestrator |
2026-03-16 00:21:02.669020 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-03-16 00:21:12.498335 | orchestrator | changed: [testbed-manager]
2026-03-16 00:21:12.498435 | orchestrator |
2026-03-16 00:21:12.498450 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-03-16 00:21:12.578384 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-03-16 00:21:12.578458 | orchestrator |
2026-03-16 00:21:12.578464 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-03-16 00:21:12.578470 | orchestrator |
2026-03-16 00:21:12.578474 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-03-16 00:21:12.625961 | orchestrator | skipping: [testbed-manager]
2026-03-16 00:21:12.626137 | orchestrator |
2026-03-16 00:21:12.626157 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-03-16 00:21:12.698676 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-03-16 00:21:12.698765 | orchestrator |
2026-03-16 00:21:12.698781 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-03-16 00:21:13.434223 | orchestrator | changed: [testbed-manager]
2026-03-16 00:21:13.434324 | orchestrator |
2026-03-16 00:21:13.434341 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-03-16 00:21:16.625505 | orchestrator | ok: [testbed-manager]
2026-03-16 00:21:16.625604 | orchestrator |
2026-03-16 00:21:16.625621 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-03-16 00:21:16.696063 | orchestrator | ok: [testbed-manager] => {
2026-03-16 00:21:16.696172 | orchestrator | "version_check_result.stdout_lines": [
2026-03-16 00:21:16.696196 | orchestrator | "=== OSISM Container Version Check ===",
2026-03-16 00:21:16.696213 | orchestrator | "Checking running containers against expected versions...",
2026-03-16 00:21:16.696233 | orchestrator | "",
2026-03-16 00:21:16.696253 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-03-16 00:21:16.696272 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-03-16 00:21:16.696292 | orchestrator | " Enabled: true",
2026-03-16 00:21:16.696313 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-03-16 00:21:16.696329 | orchestrator | " Status: ✅ MATCH",
2026-03-16 00:21:16.696340 | orchestrator | "",
2026-03-16 00:21:16.696352 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-03-16 00:21:16.696363 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-03-16 00:21:16.696402 | orchestrator | " Enabled: true",
2026-03-16 00:21:16.696414 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-03-16 00:21:16.696425 | orchestrator | " Status: ✅ MATCH",
2026-03-16 00:21:16.696436 | orchestrator | "",
2026-03-16 00:21:16.696447 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-03-16 00:21:16.696458 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-03-16 00:21:16.696469 | orchestrator | " Enabled: true",
2026-03-16 00:21:16.696480 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-03-16 00:21:16.696490 | orchestrator | " Status: ✅ MATCH",
2026-03-16 00:21:16.696501 | orchestrator | "",
2026-03-16 00:21:16.696511 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-03-16 00:21:16.696523 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-03-16 00:21:16.696533 | orchestrator | " Enabled: true",
2026-03-16 00:21:16.696544 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-03-16 00:21:16.696555 | orchestrator | " Status: ✅ MATCH",
2026-03-16 00:21:16.696565 | orchestrator | "",
2026-03-16 00:21:16.696579 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-03-16 00:21:16.696590 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-03-16 00:21:16.696600 | orchestrator | " Enabled: true",
2026-03-16 00:21:16.696611 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-03-16 00:21:16.696621 | orchestrator | " Status: ✅ MATCH",
2026-03-16 00:21:16.696632 | orchestrator | "",
2026-03-16 00:21:16.696643 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-03-16 00:21:16.696654 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-16 00:21:16.696664 | orchestrator | " Enabled: true",
2026-03-16 00:21:16.696675 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-16 00:21:16.696686 | orchestrator | " Status: ✅ MATCH",
2026-03-16 00:21:16.696696 | orchestrator | "",
2026-03-16 00:21:16.696707 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-03-16 00:21:16.696718 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-03-16 00:21:16.696729 | orchestrator | " Enabled: true",
2026-03-16 00:21:16.696740 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-03-16 00:21:16.696752 | orchestrator | " Status: ✅ MATCH",
2026-03-16 00:21:16.696762 | orchestrator | "",
2026-03-16 00:21:16.696773 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2026-03-16 00:21:16.696784 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-03-16 00:21:16.696795 | orchestrator | " Enabled: true",
2026-03-16 00:21:16.696805 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-03-16 00:21:16.696816 | orchestrator | " Status: ✅ MATCH",
2026-03-16 00:21:16.696826 | orchestrator | "",
2026-03-16 00:21:16.696837 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2026-03-16 00:21:16.696848 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1",
2026-03-16 00:21:16.696858 | orchestrator | " Enabled: true",
2026-03-16 00:21:16.696869 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1",
2026-03-16 00:21:16.696880 | orchestrator | " Status: ✅ MATCH",
2026-03-16 00:21:16.696890 | orchestrator | "",
2026-03-16 00:21:16.696901 | orchestrator | "Checking service: redis (Redis Cache)",
2026-03-16 00:21:16.696912 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-03-16 00:21:16.696922 | orchestrator | " Enabled: true",
2026-03-16 00:21:16.696933 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-03-16 00:21:16.696944 | orchestrator | " Status: ✅ MATCH",
2026-03-16 00:21:16.696954 | orchestrator | "",
2026-03-16 00:21:16.696989 | orchestrator | "Checking service: api (OSISM API Service)",
2026-03-16 00:21:16.697001 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-16 00:21:16.697021 | orchestrator | " Enabled: true",
2026-03-16 00:21:16.697031 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-16 00:21:16.697042 | orchestrator | " Status: ✅ MATCH",
2026-03-16 00:21:16.697052 | orchestrator | "",
2026-03-16 00:21:16.697063 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2026-03-16 00:21:16.697074 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-16 00:21:16.697085 | orchestrator | " Enabled: true",
2026-03-16 00:21:16.697096 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-16 00:21:16.697106 | orchestrator | " Status: ✅ MATCH",
2026-03-16 00:21:16.697117 | orchestrator | "",
2026-03-16 00:21:16.697128 | orchestrator | "Checking service: openstack (OpenStack Integration)",
2026-03-16 00:21:16.697139 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-16 00:21:16.697150 | orchestrator | " Enabled: true",
2026-03-16 00:21:16.697160 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-16 00:21:16.697171 | orchestrator | " Status: ✅ MATCH",
2026-03-16 00:21:16.697182 | orchestrator | "",
2026-03-16 00:21:16.697192 | orchestrator | "Checking service: beat (Celery Beat Scheduler)",
2026-03-16 00:21:16.697203 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-16 00:21:16.697214 | orchestrator | " Enabled: true",
2026-03-16 00:21:16.697225 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-16 00:21:16.697253 | orchestrator | " Status: ✅ MATCH",
2026-03-16 00:21:16.697264 | orchestrator | "",
2026-03-16 00:21:16.697275 | orchestrator | "Checking service: flower (Celery Flower Monitor)",
2026-03-16 00:21:16.697286 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-16 00:21:16.697307 | orchestrator | " Enabled: true",
2026-03-16 00:21:16.697318 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-16 00:21:16.697329 | orchestrator | " Status: ✅ MATCH",
2026-03-16 00:21:16.697340 | orchestrator | "",
2026-03-16 00:21:16.697350 | orchestrator | "=== Summary ===",
2026-03-16 00:21:16.697361 | orchestrator | "Errors (version mismatches): 0",
2026-03-16 00:21:16.697373 | orchestrator | "Warnings (expected containers not running): 0",
2026-03-16 00:21:16.697384 | orchestrator | "",
2026-03-16 00:21:16.697394 | orchestrator | "✅ All running containers match expected versions!"
2026-03-16 00:21:16.697405 | orchestrator | ]
2026-03-16 00:21:16.697416 | orchestrator | }
2026-03-16 00:21:16.697428 | orchestrator |
2026-03-16 00:21:16.697439 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2026-03-16 00:21:16.744276 | orchestrator | skipping: [testbed-manager]
2026-03-16 00:21:16.744361 | orchestrator |
2026-03-16 00:21:16.744376 | orchestrator | PLAY RECAP *********************************************************************
2026-03-16 00:21:16.744390 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2026-03-16 00:21:16.744401 | orchestrator |
2026-03-16 00:21:16.837423 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-16 00:21:16.837525 | orchestrator | + deactivate
2026-03-16 00:21:16.837542 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-03-16 00:21:16.837555 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-16 00:21:16.837566 | orchestrator | + export PATH
2026-03-16 00:21:16.837577 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-03-16 00:21:16.837589 | orchestrator | + '[' -n '' ']'
2026-03-16 00:21:16.837600 | orchestrator | + hash -r
2026-03-16 00:21:16.837611 | orchestrator | + '[' -n '' ']'
2026-03-16 00:21:16.837622 | orchestrator | + unset VIRTUAL_ENV
2026-03-16 00:21:16.837633 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-03-16 00:21:16.837644 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-03-16 00:21:16.837655 | orchestrator | + unset -f deactivate
2026-03-16 00:21:16.837667 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2026-03-16 00:21:16.845218 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-03-16 00:21:16.845286 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-03-16 00:21:16.845326 | orchestrator | + local max_attempts=60
2026-03-16 00:21:16.845338 | orchestrator | + local name=ceph-ansible
2026-03-16 00:21:16.845350 | orchestrator | + local attempt_num=1
2026-03-16 00:21:16.846448 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-16 00:21:16.875949 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-16 00:21:16.876088 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-03-16 00:21:16.876102 | orchestrator | + local max_attempts=60
2026-03-16 00:21:16.876113 | orchestrator | + local name=kolla-ansible
2026-03-16 00:21:16.876123 | orchestrator | + local attempt_num=1
2026-03-16 00:21:16.876653 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-03-16 00:21:16.914279 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-16 00:21:16.914365 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-03-16 00:21:16.914380 | orchestrator | + local max_attempts=60
2026-03-16 00:21:16.914393 | orchestrator | + local name=osism-ansible
2026-03-16 00:21:16.914404 | orchestrator | + local attempt_num=1
2026-03-16 00:21:16.914669 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-03-16 00:21:16.947814 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-16 00:21:16.947898 | orchestrator | + [[ true == \t\r\u\e ]]
2026-03-16 00:21:16.947927 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-03-16 00:21:17.624460 | orchestrator | + docker compose --project-directory /opt/manager ps
2026-03-16 00:21:17.817532 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2026-03-16 00:21:17.817617 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy)
2026-03-16 00:21:17.817629 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy)
2026-03-16 00:21:17.817637 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp
2026-03-16 00:21:17.817647 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp
2026-03-16 00:21:17.817673 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy)
2026-03-16 00:21:17.817682 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy)
2026-03-16 00:21:17.817690 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy)
2026-03-16 00:21:17.817697 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy)
2026-03-16 00:21:17.817705 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp
2026-03-16 00:21:17.817713 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy)
2026-03-16 00:21:17.817721 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp
2026-03-16 00:21:17.817729 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy)
2026-03-16 00:21:17.817756 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp
2026-03-16 00:21:17.817765 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy)
2026-03-16 00:21:17.817774 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy)
2026-03-16 00:21:17.822139 | orchestrator | ++ semver 9.5.0 7.0.0
2026-03-16 00:21:17.861924 | orchestrator | + [[ 1 -ge 0 ]]
2026-03-16 00:21:17.862143 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2026-03-16 00:21:17.864298 | orchestrator | + osism apply resolvconf -l testbed-manager
2026-03-16 00:21:30.098561 | orchestrator | 2026-03-16 00:21:30 | INFO | Task 467a7e24-a55d-4c4d-b395-fbb7404b0a4c (resolvconf) was prepared for execution.
2026-03-16 00:21:30.098718 | orchestrator | 2026-03-16 00:21:30 | INFO | It takes a moment until task 467a7e24-a55d-4c4d-b395-fbb7404b0a4c (resolvconf) has been started and output is visible here.
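The shell trace above shows a `wait_for_container_healthy` helper being called for each tooling container before the run continues. A minimal POSIX-sh sketch of such a helper, inferred from the trace (the real implementation lives in the testbed scripts; the `DOCKER` override is an assumption added here for testability):

```shell
#!/bin/sh
# Poll a container's health status until it reports "healthy" or the
# attempt budget is exhausted. DOCKER is overridable so the function can
# be exercised without a Docker daemon (an assumption for this sketch).
DOCKER="${DOCKER:-/usr/bin/docker}"

wait_for_container_healthy() {
    max_attempts="$1"
    name="$2"
    attempt_num=1
    until [ "$("$DOCKER" inspect -f '{{.State.Health.Status}}' "$name")" = "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
    return 0
}
```

The job invokes it as `wait_for_container_healthy 60 ceph-ansible`; when the container is already healthy the loop exits on the first probe, which matches the single `docker inspect` per container seen in the trace.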
2026-03-16 00:21:44.208682 | orchestrator |
2026-03-16 00:21:44.208783 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2026-03-16 00:21:44.208803 | orchestrator |
2026-03-16 00:21:44.208817 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-16 00:21:44.208832 | orchestrator | Monday 16 March 2026 00:21:34 +0000 (0:00:00.137) 0:00:00.137 **********
2026-03-16 00:21:44.208845 | orchestrator | ok: [testbed-manager]
2026-03-16 00:21:44.208860 | orchestrator |
2026-03-16 00:21:44.208875 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-03-16 00:21:44.208890 | orchestrator | Monday 16 March 2026 00:21:38 +0000 (0:00:03.763) 0:00:03.901 **********
2026-03-16 00:21:44.208904 | orchestrator | skipping: [testbed-manager]
2026-03-16 00:21:44.208945 | orchestrator |
2026-03-16 00:21:44.208959 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-03-16 00:21:44.208973 | orchestrator | Monday 16 March 2026 00:21:38 +0000 (0:00:00.064) 0:00:03.966 **********
2026-03-16 00:21:44.208988 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2026-03-16 00:21:44.209002 | orchestrator |
2026-03-16 00:21:44.209016 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-03-16 00:21:44.209030 | orchestrator | Monday 16 March 2026 00:21:38 +0000 (0:00:00.069) 0:00:04.035 **********
2026-03-16 00:21:44.209057 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2026-03-16 00:21:44.209071 | orchestrator |
2026-03-16 00:21:44.209085 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-03-16 00:21:44.209099 | orchestrator | Monday 16 March 2026 00:21:38 +0000 (0:00:00.068) 0:00:04.104 **********
2026-03-16 00:21:44.209113 | orchestrator | ok: [testbed-manager]
2026-03-16 00:21:44.209126 | orchestrator |
2026-03-16 00:21:44.209140 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-03-16 00:21:44.209153 | orchestrator | Monday 16 March 2026 00:21:39 +0000 (0:00:01.139) 0:00:05.244 **********
2026-03-16 00:21:44.209167 | orchestrator | skipping: [testbed-manager]
2026-03-16 00:21:44.209180 | orchestrator |
2026-03-16 00:21:44.209194 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-03-16 00:21:44.209208 | orchestrator | Monday 16 March 2026 00:21:39 +0000 (0:00:00.522) 0:00:05.313 **********
2026-03-16 00:21:44.209245 | orchestrator | ok: [testbed-manager]
2026-03-16 00:21:44.209259 | orchestrator |
2026-03-16 00:21:44.209272 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-03-16 00:21:44.209286 | orchestrator | Monday 16 March 2026 00:21:39 +0000 (0:00:00.077) 0:00:05.836 **********
2026-03-16 00:21:44.209299 | orchestrator | skipping: [testbed-manager]
2026-03-16 00:21:44.209313 | orchestrator |
2026-03-16 00:21:44.209326 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-03-16 00:21:44.209341 | orchestrator | Monday 16 March 2026 00:21:40 +0000 (0:00:00.548) 0:00:05.913 **********
2026-03-16 00:21:44.209355 | orchestrator | changed: [testbed-manager]
2026-03-16 00:21:44.209369 | orchestrator |
2026-03-16 00:21:44.209383 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-03-16 00:21:44.209396 | orchestrator | Monday 16 March 2026 00:21:40 +0000 (0:00:01.104) 0:00:06.461 **********
2026-03-16 00:21:44.209410 | orchestrator | changed: [testbed-manager]
2026-03-16 00:21:44.209424 | orchestrator |
2026-03-16 00:21:44.209438 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-03-16 00:21:44.209451 | orchestrator | Monday 16 March 2026 00:21:41 +0000 (0:00:01.104) 0:00:07.565 **********
2026-03-16 00:21:44.209465 | orchestrator | ok: [testbed-manager]
2026-03-16 00:21:44.209479 | orchestrator |
2026-03-16 00:21:44.209492 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-03-16 00:21:44.209506 | orchestrator | Monday 16 March 2026 00:21:42 +0000 (0:00:01.006) 0:00:08.571 **********
2026-03-16 00:21:44.209519 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2026-03-16 00:21:44.209533 | orchestrator |
2026-03-16 00:21:44.209547 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-03-16 00:21:44.209561 | orchestrator | Monday 16 March 2026 00:21:42 +0000 (0:00:00.075) 0:00:08.647 **********
2026-03-16 00:21:44.209574 | orchestrator | changed: [testbed-manager]
2026-03-16 00:21:44.209588 | orchestrator |
2026-03-16 00:21:44.209601 | orchestrator | PLAY RECAP *********************************************************************
2026-03-16 00:21:44.209616 | orchestrator | testbed-manager : ok=10 changed=3 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
2026-03-16 00:21:44.209629 | orchestrator |
2026-03-16 00:21:44.209643 | orchestrator |
2026-03-16 00:21:44.209656 | orchestrator | TASKS RECAP ********************************************************************
2026-03-16 00:21:44.209670 | orchestrator | Monday 16 March 2026 00:21:43 +0000 (0:00:01.165) 0:00:09.812 **********
2026-03-16 00:21:44.209683 | orchestrator | ===============================================================================
2026-03-16 00:21:44.209697 | orchestrator | Gathering Facts --------------------------------------------------------- 3.76s
2026-03-16 00:21:44.209711 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.17s
2026-03-16 00:21:44.209724 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.14s
2026-03-16 00:21:44.209738 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.10s
2026-03-16 00:21:44.209752 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.01s
2026-03-16 00:21:44.209765 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.55s
2026-03-16 00:21:44.209797 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.52s
2026-03-16 00:21:44.209811 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s
2026-03-16 00:21:44.209824 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s
2026-03-16 00:21:44.209837 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.07s
2026-03-16 00:21:44.209851 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s
2026-03-16 00:21:44.209865 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s
2026-03-16 00:21:44.209886 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s
2026-03-16 00:21:44.492209 | orchestrator | + osism apply sshconfig
2026-03-16 00:21:56.464327 | orchestrator | 2026-03-16 00:21:56 | INFO | Task 275b629b-723b-4e81-800b-8b811cde51d0 (sshconfig) was prepared for execution.
2026-03-16 00:21:56.464451 | orchestrator | 2026-03-16 00:21:56 | INFO | It takes a moment until task 275b629b-723b-4e81-800b-8b811cde51d0 (sshconfig) has been started and output is visible here.
2026-03-16 00:22:08.183100 | orchestrator |
2026-03-16 00:22:08.183206 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2026-03-16 00:22:08.183222 | orchestrator |
2026-03-16 00:22:08.183254 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2026-03-16 00:22:08.183267 | orchestrator | Monday 16 March 2026 00:22:00 +0000 (0:00:00.183) 0:00:00.183 **********
2026-03-16 00:22:08.183279 | orchestrator | ok: [testbed-manager]
2026-03-16 00:22:08.183291 | orchestrator |
2026-03-16 00:22:08.183303 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2026-03-16 00:22:08.183314 | orchestrator | Monday 16 March 2026 00:22:01 +0000 (0:00:00.526) 0:00:00.710 **********
2026-03-16 00:22:08.183325 | orchestrator | changed: [testbed-manager]
2026-03-16 00:22:08.183337 | orchestrator |
2026-03-16 00:22:08.183348 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2026-03-16 00:22:08.183359 | orchestrator | Monday 16 March 2026 00:22:01 +0000 (0:00:00.509) 0:00:01.220 **********
2026-03-16 00:22:08.183370 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2026-03-16 00:22:08.183381 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2026-03-16 00:22:08.183392 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2026-03-16 00:22:08.183403 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2026-03-16 00:22:08.183414 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2026-03-16 00:22:08.183425 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2026-03-16 00:22:08.183435 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2026-03-16 00:22:08.183446 | orchestrator |
2026-03-16 00:22:08.183457 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2026-03-16 00:22:08.183468 | orchestrator | Monday 16 March 2026 00:22:07 +0000 (0:00:05.627) 0:00:06.847 **********
2026-03-16 00:22:08.183479 | orchestrator | skipping: [testbed-manager]
2026-03-16 00:22:08.183490 | orchestrator |
2026-03-16 00:22:08.183501 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2026-03-16 00:22:08.183512 | orchestrator | Monday 16 March 2026 00:22:07 +0000 (0:00:00.070) 0:00:06.917 **********
2026-03-16 00:22:08.183522 | orchestrator | changed: [testbed-manager]
2026-03-16 00:22:08.183533 | orchestrator |
2026-03-16 00:22:08.183544 | orchestrator | PLAY RECAP *********************************************************************
2026-03-16 00:22:08.183556 | orchestrator | testbed-manager : ok=4 changed=3 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
2026-03-16 00:22:08.183568 | orchestrator |
2026-03-16 00:22:08.183579 | orchestrator |
2026-03-16 00:22:08.183590 | orchestrator | TASKS RECAP ********************************************************************
2026-03-16 00:22:08.183601 | orchestrator | Monday 16 March 2026 00:22:07 +0000 (0:00:00.563) 0:00:07.481 **********
2026-03-16 00:22:08.183612 | orchestrator | ===============================================================================
2026-03-16 00:22:08.183623 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.63s
2026-03-16 00:22:08.183633 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.56s
2026-03-16 00:22:08.183644 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.53s
2026-03-16 00:22:08.183656 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.51s
2026-03-16 00:22:08.183669 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s
2026-03-16 00:22:08.487214 | orchestrator | + osism apply known-hosts
2026-03-16 00:22:20.556110 | orchestrator | 2026-03-16 00:22:20 | INFO | Task 9d9d774a-5ed5-4a03-b622-9e9084b071ec (known-hosts) was prepared for execution.
2026-03-16 00:22:20.556220 | orchestrator | 2026-03-16 00:22:20 | INFO | It takes a moment until task 9d9d774a-5ed5-4a03-b622-9e9084b071ec (known-hosts) has been started and output is visible here.
2026-03-16 00:22:36.196531 | orchestrator |
2026-03-16 00:22:36.196641 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2026-03-16 00:22:36.196658 | orchestrator |
2026-03-16 00:22:36.196669 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2026-03-16 00:22:36.196681 | orchestrator | Monday 16 March 2026 00:22:24 +0000 (0:00:00.119) 0:00:00.119 **********
2026-03-16 00:22:36.196693 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-03-16 00:22:36.196705 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-03-16 00:22:36.196716 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-03-16 00:22:36.196727 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-03-16 00:22:36.196738 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-03-16 00:22:36.196749 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-03-16 00:22:36.196759 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-03-16 00:22:36.196770 | orchestrator |
2026-03-16 00:22:36.196781 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2026-03-16 00:22:36.196793 | orchestrator | Monday 16 March 2026 00:22:29 +0000 (0:00:05.692) 0:00:05.811 **********
2026-03-16 00:22:36.196805 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-03-16 00:22:36.196862 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-03-16 00:22:36.196883 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-03-16 00:22:36.196901 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-03-16 00:22:36.196918 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-03-16 00:22:36.196950 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-03-16 00:22:36.196969 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-03-16 00:22:36.196987 | orchestrator |
2026-03-16 00:22:36.197005 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-16 00:22:36.197023 | orchestrator | Monday 16 March 2026 00:22:30 +0000 (0:00:00.135) 0:00:05.947 **********
2026-03-16 00:22:36.197041 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNv47la+iC2uCEbOYjOZEYMQNnxst43xAxaRiru5Tq8AnzVo4NWkbVEQnrGUgqbv1ylsTOVz/PhM7NgItZxZKO8=)
2026-03-16 00:22:36.197071 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7Zb28Uy8DFm3wEnV4pMkrhtE95ImMKKo+tHJidP/XZw6S/2Tr3Suu7YfOVcos/lkCAUm94uewzGbe4LwwbNPmHM+PuHlWGEgfVKCnjPsJN1cmF4fXSCIzXSsdc6yT16pqA8BR9CkmmbvXj1wgk1cOQWmKVy5dfbA5Jnit7xYpiF/4QLUFje0r+iGY4EGT9VxIJ6fxpHPiSwfQLzC9coZvmEYkpXovnBxMQ8YteJjWhNiPPLk9TOIrACvAOFrwXNKqD593N8Vq9hz2xzYlUI3kIU1nG7U/4RXPSmpgSe1T6/3ibLVdodPa0OSVldXNWLgOpRgKepidyXFOBbV0u7ml0rKb+OwD61Mh75zeiSRLu3QudUSISzzVJgGv4PlLTzg2yMVafUshejw9dpU69iRL3ZaHv+xK1X1QamR9fS4313cVZz1ldYTpyGhE600Vpv1DWuXnZMBNezuaczKZfrsHjBSBoz3J8J70jhfSk3JN0dkc/7WZ5GXLYr0eZaZ9Ya8=)
2026-03-16 00:22:36.197130 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPsjtqzhH61VepFP3PtnE87jdHiDgq1ub2zsAFZhy/kM)
2026-03-16 00:22:36.197154 | orchestrator |
2026-03-16 00:22:36.197176 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-16 00:22:36.197196 | orchestrator | Monday 16 March 2026 00:22:31 +0000 (0:00:01.036) 0:00:06.983 **********
2026-03-16 00:22:36.197244 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9r0DdpmjCv275n2HBixVxmdkyLYBRWRavt8NUBDeNMAFQdodIeMEuUOQmBBH3PhnEXkLmDBaEPspvzhIICsE2gYyvGnXAuWAZMYrBgQidMsDJYjPZ6gzE0E+04rne2Z8ouNYcf39rCoAgKQfIROwHnUsuDp476qD9daHr2j5n2oM3tYtrTiU+AkXP1GaW4R+rketRfmrSqDcu6p6/bSfkJtn2Gj85D3trSjCKk4CfVcgK0cSzWjuholOjPYaUeAgFQ2OHHzDZe0/sNLksXppK8eKqlmsfQ+tUg8gruEBipLojcWXnBjtUe7LUwLKe2IqHKVCfy5/PyJLuAKvLarVNJqlfhNdPhAU95YMhPyipkHn9tUvYABpP2l1zUYCUIfpFmPaVzKeIiSp1cmH4QWpU1wTL31QL5khzwRjUlGfTUsS/gcLlErrxpqcW7ToE0MlXlG08mI4QzgiQNq0m3KaN+TtjRxZ51GxTfsH0mZ2mckfoQNeQdpyI0/Yl6sF+Xdc=)
2026-03-16 00:22:36.197266 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLUOz1DXe4sd1Bknm/MPyzCLRFkBUycCL6mEM9zs1tzs9kFaw+hcegj+Jm3ZLterTbMkqD6abZbQNrPyzl4pc/Q=)
2026-03-16 00:22:36.197286 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPmZU+q+P70MzXkFp80bl/TWKmnUVRo5QbLdVQ9/wqvH)
2026-03-16 00:22:36.197306 | orchestrator |
2026-03-16 00:22:36.197324 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-16 00:22:36.197341 | orchestrator | Monday 16 March 2026 00:22:32 +0000 (0:00:00.934) 0:00:07.917 **********
2026-03-16 00:22:36.197361 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC8TAN/yCczhCFmSfnXhX4eHCKHEImqazbP35NE8RPHAiVIaC4EU4Pea3VhAd7B6J3UnXNT4Dh8NryvqLHbM4vCp/xJNQ7gYA7asUBGQWhdQJOu9yLrhwOpZBduXbjOyq9aaoXIj/IWE0rP1z5AXzn4kuU4oYF/Vnj18RYAimUa77I7OjCYAs9tbAhhFRFO73zZbZ3gETZ/qn3W5YuX9I7T+aDyU4w3lLdYNVaohapRKRvFSXvNReD13IDt2f3d5sHavxFxdXpd4jeBvxt7vTziAPMyEbYmmoxFFjZ5+fewXWJy5WcrXOxeYiqgHc8cTyBCwlqpJ2jhexnvHmc97fnnwOLWdRQM1PCWZsHjAR2hFdXwl1xc+YpXFnY3A/ToOKugaTMaLdNZmlD5Y+nUUgA2MPKECuqajILE+2g4HGvvR+/kf6SY4T2AUehiQr5DXbB+pGIQqQpGSpukUKGsl6Vu3NV0I4sOt+0PbnH+6427rkKhHjfRSKYLcJxLqOcZdhU=)
2026-03-16 00:22:36.197379 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN22Qt33zHnYAsK5QwmQu8LszKuJCG6pOZUIZdlzomS/WA6yHE4A7kQDxG8KYd8sCqg3SXyLq5Qfix3xeMHAYmo=)
2026-03-16 00:22:36.197398 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIENxJhNM03zZZAm2bwK65paJtl9WrnVnkUAaaLxnvZ5E)
2026-03-16 00:22:36.197417 | orchestrator |
2026-03-16 00:22:36.197437 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-16 00:22:36.197455 | orchestrator | Monday 16 March 2026 00:22:33 +0000 (0:00:00.978) 0:00:08.896 **********
2026-03-16 00:22:36.197473 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwOPasIHOTGj2gHCJBTgYr0kWEfLHmQ5SAWrx1+mEu1jrupkatxTzE3cHhZovZRMLM83WCS1yG5Qk8hAi4w77mWQUb3w/WEfpC1Eeo87r3HQP9Y09JTjlBxU4DcfTrt0ipeOl9Kq7KsuNjMoKbUA22XtGvvJLs8ltfB/44qUTyIKvUJeCUPI8LHD8Z5tIkjTFWpHxo9GT4ulpXDQCyHzwOsF+75kKjD4fesWp0WZQk4RKE7I9iHn3vdlVizkOc9oOCT+XIvyLsgJiU26V8/M1Epck3lP/OghMIrFUMimA9ngbzppCY1nMK5doNxlFqWd5Z/lPR/p+ptaetnSwAYH8G152aDHU297TYnWhb2oDAGdYA4t7+uBGnpHQ8ZhrvpS056ufl5SXp4GEbAHjGbtENJUjypmC9mz3vNycyYaHNZ1gzy1lhmVUtbPLflEEL1UF0tNFLZ/5qv5ZPL4QDOWwAPbgjkWtkf3vR22dMTGCSDX55GLic0CbdrXGFyuCuELk=)
2026-03-16 00:22:36.197506 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOQimT0gKNPhIXtlpCoohRpr+Ja31vK3xX3kIBAB9/Sm6D7gvPoawE30tOFdmw/9LKWsK0PNYmn8o7An8ImwPBo=)
2026-03-16 00:22:36.197525 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBrPk1DbqGm7RYGzptP4XLQg1LzqHOlxF7mIv3fLgK7M)
2026-03-16 00:22:36.197543 | orchestrator |
2026-03-16 00:22:36.197561 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-16 00:22:36.197578 | orchestrator | Monday 16 March 2026 00:22:34 +0000 (0:00:01.058) 0:00:09.954 **********
2026-03-16 00:22:36.197694 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPVpE1ifHgwbuYDAqX8Lo3quJs7Pr2SdfcngdOUgIrqZ3nwjQy2upn0/OcVZieWwaQIJV19ht9mRdBsrHWTv1k4=)
2026-03-16 00:22:36.197717 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJB1AC04Q9sJvGlL3yAqYRDemVX/XJC9fmQbEaDWs059)
2026-03-16 00:22:36.197736 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABgQC5O3kvIXgU5iEiY8HfpReVAGiOASEToGQk2xayECXQTSwk/AeaJtQ5mJpUDK/TQRCUjHWmG8l5USchENFhEJ2pHw/6+EIrJE4BYV5BAq6YdtbmxOlP6Grm/HyMSF01Abg6M1Lpg0O9nloYEChXk3oAuTChXWVaPiM+5sVvWIRYrURxax8FcQWKDyVOQN02UT5wVWilvQaBsFBUlat+YbFXUb5GMr5mI8uthXp2kmlpYE5NmR8mR4Ea6VhH1lJDHChzIvd6iYONyXRuI0g3e5TDRrNYNLSO9kdPJ5EY8C9s3E7Pm8dyRgJzIP7Hn8478gNOzP6oCO2ja80H1dEujax5rPxUiiH5rQc5CZCDGWjtXHpucJrQA/eUdKZRBkmyOvQMu61rAMCvV2F2yDf7wPjSz3ehWO5lP9/5Dy0oEh4EXczJP0kz9Cy6HhM852nZzWBbFpv3F7YsD1JGVH7QStAhSM4IQrGmDplTtpUmXFzytV9gv/K7oJE4L1Kf9Sjo5NU=) 2026-03-16 00:22:36.197755 | orchestrator | 2026-03-16 00:22:36.197771 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-16 00:22:36.197788 | orchestrator | Monday 16 March 2026 00:22:35 +0000 (0:00:01.034) 0:00:10.989 ********** 2026-03-16 00:22:36.197848 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEj12plTFi37cEgFAjM63/IYyA+LeyS59NZ53FFerGmO2MpyqVEIOAHbA9Yjji3rhtYSa6Ji7tpXtWHVIJVLr/Y=) 2026-03-16 00:22:46.692435 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINqxLuXApRfjQYxLOk1ZGhlXcS6jO3K1dt0uvaGwRbi5) 2026-03-16 00:22:46.692535 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCuJvUfrzntVnGWF7tAP91KtP/KK2KDIYsEhlkPGkl0k+0mHgNtrDmuZeqhdBWnvyAPWsG2ZhT2MQeS0+p78SWN4tmNJGm3f/zWW75TVj6apkVXeYyE/zs7OPIy4OpcMaL6X2XIT34oLsFNdxS7ZLoemOBUkh2fKD1v695BYPJh9GQfRpJhOEGsFWUvZdvGmA0nv3oUid9iecphLXwVXAb7jKbDtTSTyDAgSV5GH+jS1XjCqhfnLcz1lgUZNq8J+6H+byLfJGxQ37t5EcAqtvCaquMU0Lr4d/RBqOvCs4S5br/fJ8MpoTTah1tCbqf3dL+2SksHRo8owv30ACLOHqFQWhrVbLhzt8sMoFYQDdUDB24jykqI0kSe0gRBKfMyf7x3+3gPmb8wiBbT5labOPuOml+cFvZ6wSvt71i/u18zVhAOBrSBanu1YIdj/cu5NReT4sGq3b4f1bUzHsnH1QqjKP63MowfLrbXbbDTagvibtCwCMPzsl+sxWUWRPhkHWU=) 2026-03-16 00:22:46.692552 | orchestrator | 2026-03-16 00:22:46.692565 | 
orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-16 00:22:46.692578 | orchestrator | Monday 16 March 2026 00:22:36 +0000 (0:00:01.011) 0:00:12.000 ********** 2026-03-16 00:22:46.692590 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCKqZd+p/YM1ZqgVgXKJsxOZo/+vhGGRD9btp9jWhjyUUq4NGmkfPXnzAjB6sf2jxTXiJbo20WIO7RbnBROrBeKiqmvOc9zAXgWJn7GpP/zUH/wurL3yTJ8COC+zYUxC3o8zSvRGvz35r40naMgiNNSFClkX1asbJNHV3HARN0MOGJuClJM+juFOerFCOSa1iaNc6B/M1AW4TXllyjsJlwog34AC5jLXddQRQtmFCDPyqAjdsO3LwTd9+SL9kZF6ibAN4YoGdRrSBtebobe73DTIjr8tSBmp9Hmep95iml6u92XBrO/RIYtPHIuJ/QCngZ/KLnmkzUFMDsa+VJgW2ux/PcOuQwigtJe6O5Ie/mOgkiD5wI0rDyZwGNZzGgN3wXW9rdYIB260ujXAgsFd4EITov+nWdP86Y3tXZ7w7pHr7n8jdcH9KfXH79/CP2Ls4VJA0/snnmBegP1vK3wg4Aoku8UOPiuLRpVIoPvdpVd7fnvMgRAdoEsY5kfshWoQXc=) 2026-03-16 00:22:46.692602 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP3w1cFP9cbQL51TltrKTwQF/gLtjK563CvFU79AtrhgHYxDtjb+sjUhdN6uNize4BmanOeOEgsIpF8YSx+HQSg=) 2026-03-16 00:22:46.692675 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIObixNO9JRa4d4RyRSKiejewgjEBbd3w3HCiRImKLD5x) 2026-03-16 00:22:46.692688 | orchestrator | 2026-03-16 00:22:46.692699 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-03-16 00:22:46.692711 | orchestrator | Monday 16 March 2026 00:22:37 +0000 (0:00:01.026) 0:00:13.027 ********** 2026-03-16 00:22:46.692723 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-16 00:22:46.692734 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-16 00:22:46.692745 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-16 00:22:46.692756 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-16 00:22:46.692766 | orchestrator | 
ok: [testbed-manager] => (item=testbed-node-0) 2026-03-16 00:22:46.692777 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-16 00:22:46.692788 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-16 00:22:46.692887 | orchestrator | 2026-03-16 00:22:46.692902 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-03-16 00:22:46.692914 | orchestrator | Monday 16 March 2026 00:22:42 +0000 (0:00:05.215) 0:00:18.242 ********** 2026-03-16 00:22:46.692926 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-16 00:22:46.692939 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-16 00:22:46.692951 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-16 00:22:46.692961 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-16 00:22:46.692973 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-16 00:22:46.692985 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-16 00:22:46.692998 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-16 00:22:46.693012 | orchestrator | 2026-03-16 00:22:46.693041 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-16 00:22:46.693054 | orchestrator | Monday 16 March 2026 00:22:42 +0000 (0:00:00.171) 0:00:18.413 ********** 2026-03-16 00:22:46.693067 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNv47la+iC2uCEbOYjOZEYMQNnxst43xAxaRiru5Tq8AnzVo4NWkbVEQnrGUgqbv1ylsTOVz/PhM7NgItZxZKO8=) 2026-03-16 00:22:46.693102 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7Zb28Uy8DFm3wEnV4pMkrhtE95ImMKKo+tHJidP/XZw6S/2Tr3Suu7YfOVcos/lkCAUm94uewzGbe4LwwbNPmHM+PuHlWGEgfVKCnjPsJN1cmF4fXSCIzXSsdc6yT16pqA8BR9CkmmbvXj1wgk1cOQWmKVy5dfbA5Jnit7xYpiF/4QLUFje0r+iGY4EGT9VxIJ6fxpHPiSwfQLzC9coZvmEYkpXovnBxMQ8YteJjWhNiPPLk9TOIrACvAOFrwXNKqD593N8Vq9hz2xzYlUI3kIU1nG7U/4RXPSmpgSe1T6/3ibLVdodPa0OSVldXNWLgOpRgKepidyXFOBbV0u7ml0rKb+OwD61Mh75zeiSRLu3QudUSISzzVJgGv4PlLTzg2yMVafUshejw9dpU69iRL3ZaHv+xK1X1QamR9fS4313cVZz1ldYTpyGhE600Vpv1DWuXnZMBNezuaczKZfrsHjBSBoz3J8J70jhfSk3JN0dkc/7WZ5GXLYr0eZaZ9Ya8=) 2026-03-16 00:22:46.693126 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPsjtqzhH61VepFP3PtnE87jdHiDgq1ub2zsAFZhy/kM) 2026-03-16 00:22:46.693139 | orchestrator | 2026-03-16 00:22:46.693152 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-16 00:22:46.693165 | orchestrator | Monday 16 March 2026 00:22:43 +0000 (0:00:01.004) 0:00:19.418 ********** 2026-03-16 00:22:46.693177 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC9r0DdpmjCv275n2HBixVxmdkyLYBRWRavt8NUBDeNMAFQdodIeMEuUOQmBBH3PhnEXkLmDBaEPspvzhIICsE2gYyvGnXAuWAZMYrBgQidMsDJYjPZ6gzE0E+04rne2Z8ouNYcf39rCoAgKQfIROwHnUsuDp476qD9daHr2j5n2oM3tYtrTiU+AkXP1GaW4R+rketRfmrSqDcu6p6/bSfkJtn2Gj85D3trSjCKk4CfVcgK0cSzWjuholOjPYaUeAgFQ2OHHzDZe0/sNLksXppK8eKqlmsfQ+tUg8gruEBipLojcWXnBjtUe7LUwLKe2IqHKVCfy5/PyJLuAKvLarVNJqlfhNdPhAU95YMhPyipkHn9tUvYABpP2l1zUYCUIfpFmPaVzKeIiSp1cmH4QWpU1wTL31QL5khzwRjUlGfTUsS/gcLlErrxpqcW7ToE0MlXlG08mI4QzgiQNq0m3KaN+TtjRxZ51GxTfsH0mZ2mckfoQNeQdpyI0/Yl6sF+Xdc=) 2026-03-16 00:22:46.693191 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLUOz1DXe4sd1Bknm/MPyzCLRFkBUycCL6mEM9zs1tzs9kFaw+hcegj+Jm3ZLterTbMkqD6abZbQNrPyzl4pc/Q=) 2026-03-16 00:22:46.693204 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPmZU+q+P70MzXkFp80bl/TWKmnUVRo5QbLdVQ9/wqvH) 2026-03-16 00:22:46.693216 | orchestrator | 2026-03-16 00:22:46.693229 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-16 00:22:46.693241 | orchestrator | Monday 16 March 2026 00:22:44 +0000 (0:00:01.029) 0:00:20.448 ********** 2026-03-16 00:22:46.693254 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN22Qt33zHnYAsK5QwmQu8LszKuJCG6pOZUIZdlzomS/WA6yHE4A7kQDxG8KYd8sCqg3SXyLq5Qfix3xeMHAYmo=) 2026-03-16 00:22:46.693267 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIENxJhNM03zZZAm2bwK65paJtl9WrnVnkUAaaLxnvZ5E) 2026-03-16 00:22:46.693281 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC8TAN/yCczhCFmSfnXhX4eHCKHEImqazbP35NE8RPHAiVIaC4EU4Pea3VhAd7B6J3UnXNT4Dh8NryvqLHbM4vCp/xJNQ7gYA7asUBGQWhdQJOu9yLrhwOpZBduXbjOyq9aaoXIj/IWE0rP1z5AXzn4kuU4oYF/Vnj18RYAimUa77I7OjCYAs9tbAhhFRFO73zZbZ3gETZ/qn3W5YuX9I7T+aDyU4w3lLdYNVaohapRKRvFSXvNReD13IDt2f3d5sHavxFxdXpd4jeBvxt7vTziAPMyEbYmmoxFFjZ5+fewXWJy5WcrXOxeYiqgHc8cTyBCwlqpJ2jhexnvHmc97fnnwOLWdRQM1PCWZsHjAR2hFdXwl1xc+YpXFnY3A/ToOKugaTMaLdNZmlD5Y+nUUgA2MPKECuqajILE+2g4HGvvR+/kf6SY4T2AUehiQr5DXbB+pGIQqQpGSpukUKGsl6Vu3NV0I4sOt+0PbnH+6427rkKhHjfRSKYLcJxLqOcZdhU=) 2026-03-16 00:22:46.693293 | orchestrator | 2026-03-16 00:22:46.693305 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-16 00:22:46.693316 | orchestrator | Monday 16 March 2026 00:22:45 +0000 (0:00:01.030) 0:00:21.478 ********** 2026-03-16 00:22:46.693328 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBrPk1DbqGm7RYGzptP4XLQg1LzqHOlxF7mIv3fLgK7M) 2026-03-16 00:22:46.693352 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwOPasIHOTGj2gHCJBTgYr0kWEfLHmQ5SAWrx1+mEu1jrupkatxTzE3cHhZovZRMLM83WCS1yG5Qk8hAi4w77mWQUb3w/WEfpC1Eeo87r3HQP9Y09JTjlBxU4DcfTrt0ipeOl9Kq7KsuNjMoKbUA22XtGvvJLs8ltfB/44qUTyIKvUJeCUPI8LHD8Z5tIkjTFWpHxo9GT4ulpXDQCyHzwOsF+75kKjD4fesWp0WZQk4RKE7I9iHn3vdlVizkOc9oOCT+XIvyLsgJiU26V8/M1Epck3lP/OghMIrFUMimA9ngbzppCY1nMK5doNxlFqWd5Z/lPR/p+ptaetnSwAYH8G152aDHU297TYnWhb2oDAGdYA4t7+uBGnpHQ8ZhrvpS056ufl5SXp4GEbAHjGbtENJUjypmC9mz3vNycyYaHNZ1gzy1lhmVUtbPLflEEL1UF0tNFLZ/5qv5ZPL4QDOWwAPbgjkWtkf3vR22dMTGCSDX55GLic0CbdrXGFyuCuELk=) 2026-03-16 00:22:51.004833 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOQimT0gKNPhIXtlpCoohRpr+Ja31vK3xX3kIBAB9/Sm6D7gvPoawE30tOFdmw/9LKWsK0PNYmn8o7An8ImwPBo=) 2026-03-16 00:22:51.004952 | orchestrator | 2026-03-16 00:22:51.004968 | orchestrator | 
TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-16 00:22:51.004981 | orchestrator | Monday 16 March 2026 00:22:46 +0000 (0:00:01.017) 0:00:22.496 ********** 2026-03-16 00:22:51.004994 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC5O3kvIXgU5iEiY8HfpReVAGiOASEToGQk2xayECXQTSwk/AeaJtQ5mJpUDK/TQRCUjHWmG8l5USchENFhEJ2pHw/6+EIrJE4BYV5BAq6YdtbmxOlP6Grm/HyMSF01Abg6M1Lpg0O9nloYEChXk3oAuTChXWVaPiM+5sVvWIRYrURxax8FcQWKDyVOQN02UT5wVWilvQaBsFBUlat+YbFXUb5GMr5mI8uthXp2kmlpYE5NmR8mR4Ea6VhH1lJDHChzIvd6iYONyXRuI0g3e5TDRrNYNLSO9kdPJ5EY8C9s3E7Pm8dyRgJzIP7Hn8478gNOzP6oCO2ja80H1dEujax5rPxUiiH5rQc5CZCDGWjtXHpucJrQA/eUdKZRBkmyOvQMu61rAMCvV2F2yDf7wPjSz3ehWO5lP9/5Dy0oEh4EXczJP0kz9Cy6HhM852nZzWBbFpv3F7YsD1JGVH7QStAhSM4IQrGmDplTtpUmXFzytV9gv/K7oJE4L1Kf9Sjo5NU=) 2026-03-16 00:22:51.005008 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJB1AC04Q9sJvGlL3yAqYRDemVX/XJC9fmQbEaDWs059) 2026-03-16 00:22:51.005020 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPVpE1ifHgwbuYDAqX8Lo3quJs7Pr2SdfcngdOUgIrqZ3nwjQy2upn0/OcVZieWwaQIJV19ht9mRdBsrHWTv1k4=) 2026-03-16 00:22:51.005032 | orchestrator | 2026-03-16 00:22:51.005043 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-16 00:22:51.005054 | orchestrator | Monday 16 March 2026 00:22:47 +0000 (0:00:01.018) 0:00:23.514 ********** 2026-03-16 00:22:51.005065 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCuJvUfrzntVnGWF7tAP91KtP/KK2KDIYsEhlkPGkl0k+0mHgNtrDmuZeqhdBWnvyAPWsG2ZhT2MQeS0+p78SWN4tmNJGm3f/zWW75TVj6apkVXeYyE/zs7OPIy4OpcMaL6X2XIT34oLsFNdxS7ZLoemOBUkh2fKD1v695BYPJh9GQfRpJhOEGsFWUvZdvGmA0nv3oUid9iecphLXwVXAb7jKbDtTSTyDAgSV5GH+jS1XjCqhfnLcz1lgUZNq8J+6H+byLfJGxQ37t5EcAqtvCaquMU0Lr4d/RBqOvCs4S5br/fJ8MpoTTah1tCbqf3dL+2SksHRo8owv30ACLOHqFQWhrVbLhzt8sMoFYQDdUDB24jykqI0kSe0gRBKfMyf7x3+3gPmb8wiBbT5labOPuOml+cFvZ6wSvt71i/u18zVhAOBrSBanu1YIdj/cu5NReT4sGq3b4f1bUzHsnH1QqjKP63MowfLrbXbbDTagvibtCwCMPzsl+sxWUWRPhkHWU=) 2026-03-16 00:22:51.005076 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEj12plTFi37cEgFAjM63/IYyA+LeyS59NZ53FFerGmO2MpyqVEIOAHbA9Yjji3rhtYSa6Ji7tpXtWHVIJVLr/Y=) 2026-03-16 00:22:51.005088 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINqxLuXApRfjQYxLOk1ZGhlXcS6jO3K1dt0uvaGwRbi5) 2026-03-16 00:22:51.005099 | orchestrator | 2026-03-16 00:22:51.005109 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-16 00:22:51.005120 | orchestrator | Monday 16 March 2026 00:22:48 +0000 (0:00:01.058) 0:00:24.573 ********** 2026-03-16 00:22:51.005156 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCKqZd+p/YM1ZqgVgXKJsxOZo/+vhGGRD9btp9jWhjyUUq4NGmkfPXnzAjB6sf2jxTXiJbo20WIO7RbnBROrBeKiqmvOc9zAXgWJn7GpP/zUH/wurL3yTJ8COC+zYUxC3o8zSvRGvz35r40naMgiNNSFClkX1asbJNHV3HARN0MOGJuClJM+juFOerFCOSa1iaNc6B/M1AW4TXllyjsJlwog34AC5jLXddQRQtmFCDPyqAjdsO3LwTd9+SL9kZF6ibAN4YoGdRrSBtebobe73DTIjr8tSBmp9Hmep95iml6u92XBrO/RIYtPHIuJ/QCngZ/KLnmkzUFMDsa+VJgW2ux/PcOuQwigtJe6O5Ie/mOgkiD5wI0rDyZwGNZzGgN3wXW9rdYIB260ujXAgsFd4EITov+nWdP86Y3tXZ7w7pHr7n8jdcH9KfXH79/CP2Ls4VJA0/snnmBegP1vK3wg4Aoku8UOPiuLRpVIoPvdpVd7fnvMgRAdoEsY5kfshWoQXc=) 2026-03-16 00:22:51.005169 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP3w1cFP9cbQL51TltrKTwQF/gLtjK563CvFU79AtrhgHYxDtjb+sjUhdN6uNize4BmanOeOEgsIpF8YSx+HQSg=) 2026-03-16 00:22:51.005180 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIObixNO9JRa4d4RyRSKiejewgjEBbd3w3HCiRImKLD5x) 2026-03-16 00:22:51.005191 | orchestrator | 2026-03-16 00:22:51.005202 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-03-16 00:22:51.005220 | orchestrator | Monday 16 March 2026 00:22:49 +0000 (0:00:01.041) 0:00:25.615 ********** 2026-03-16 00:22:51.005231 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-16 00:22:51.005243 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-16 00:22:51.005254 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-16 00:22:51.005265 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-16 00:22:51.005292 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-16 00:22:51.005303 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-16 00:22:51.005314 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-16 00:22:51.005325 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:22:51.005337 | orchestrator | 2026-03-16 00:22:51.005348 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-03-16 00:22:51.005361 | orchestrator | Monday 16 March 2026 00:22:49 +0000 (0:00:00.152) 0:00:25.767 ********** 2026-03-16 00:22:51.005375 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:22:51.005387 | orchestrator | 2026-03-16 00:22:51.005400 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-03-16 00:22:51.005414 | orchestrator | Monday 16 March 2026 
00:22:50 +0000 (0:00:00.056) 0:00:25.824 ********** 2026-03-16 00:22:51.005431 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:22:51.005445 | orchestrator | 2026-03-16 00:22:51.005458 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-03-16 00:22:51.005471 | orchestrator | Monday 16 March 2026 00:22:50 +0000 (0:00:00.053) 0:00:25.878 ********** 2026-03-16 00:22:51.005484 | orchestrator | changed: [testbed-manager] 2026-03-16 00:22:51.005495 | orchestrator | 2026-03-16 00:22:51.005506 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-16 00:22:51.005517 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-16 00:22:51.005530 | orchestrator | 2026-03-16 00:22:51.005540 | orchestrator | 2026-03-16 00:22:51.005551 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-16 00:22:51.005562 | orchestrator | Monday 16 March 2026 00:22:50 +0000 (0:00:00.744) 0:00:26.622 ********** 2026-03-16 00:22:51.005573 | orchestrator | =============================================================================== 2026-03-16 00:22:51.005584 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.69s 2026-03-16 00:22:51.005595 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.22s 2026-03-16 00:22:51.005606 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-03-16 00:22:51.005617 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-03-16 00:22:51.005628 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-03-16 00:22:51.005638 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 
2026-03-16 00:22:51.005649 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-03-16 00:22:51.005660 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-03-16 00:22:51.005671 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-03-16 00:22:51.005682 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-03-16 00:22:51.005693 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-03-16 00:22:51.005703 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-03-16 00:22:51.005714 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-03-16 00:22:51.005725 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-03-16 00:22:51.005743 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2026-03-16 00:22:51.005753 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.93s 2026-03-16 00:22:51.005764 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.74s 2026-03-16 00:22:51.005775 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2026-03-16 00:22:51.005786 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.15s 2026-03-16 00:22:51.005819 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.14s 2026-03-16 00:22:51.317426 | orchestrator | + osism apply squid 2026-03-16 00:23:03.252420 | orchestrator | 2026-03-16 00:23:03 | INFO  | Task 608c1a90-07b1-4398-8383-1a2abafa621d (squid) was prepared for execution. 
2026-03-16 00:23:03.252500 | orchestrator | 2026-03-16 00:23:03 | INFO  | It takes a moment until task 608c1a90-07b1-4398-8383-1a2abafa621d (squid) has been started and output is visible here. 2026-03-16 00:24:56.610894 | orchestrator | 2026-03-16 00:24:56.611033 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-03-16 00:24:56.611051 | orchestrator | 2026-03-16 00:24:56.611063 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-03-16 00:24:56.611074 | orchestrator | Monday 16 March 2026 00:23:07 +0000 (0:00:00.117) 0:00:00.117 ********** 2026-03-16 00:24:56.611086 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-03-16 00:24:56.611098 | orchestrator | 2026-03-16 00:24:56.611110 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-03-16 00:24:56.611121 | orchestrator | Monday 16 March 2026 00:23:07 +0000 (0:00:00.068) 0:00:00.186 ********** 2026-03-16 00:24:56.611132 | orchestrator | ok: [testbed-manager] 2026-03-16 00:24:56.611144 | orchestrator | 2026-03-16 00:24:56.611155 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-03-16 00:24:56.611166 | orchestrator | Monday 16 March 2026 00:23:08 +0000 (0:00:01.070) 0:00:01.256 ********** 2026-03-16 00:24:56.611178 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-03-16 00:24:56.611189 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-03-16 00:24:56.611200 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-03-16 00:24:56.611211 | orchestrator | 2026-03-16 00:24:56.611222 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-03-16 00:24:56.611233 | orchestrator | Monday 16 
March 2026 00:23:09 +0000 (0:00:00.980) 0:00:02.236 ********** 2026-03-16 00:24:56.611244 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-03-16 00:24:56.611255 | orchestrator | 2026-03-16 00:24:56.611266 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-03-16 00:24:56.611277 | orchestrator | Monday 16 March 2026 00:23:10 +0000 (0:00:00.946) 0:00:03.183 ********** 2026-03-16 00:24:56.611288 | orchestrator | ok: [testbed-manager] 2026-03-16 00:24:56.611299 | orchestrator | 2026-03-16 00:24:56.611310 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-03-16 00:24:56.611321 | orchestrator | Monday 16 March 2026 00:23:10 +0000 (0:00:00.304) 0:00:03.488 ********** 2026-03-16 00:24:56.611332 | orchestrator | changed: [testbed-manager] 2026-03-16 00:24:56.611344 | orchestrator | 2026-03-16 00:24:56.611355 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-03-16 00:24:56.611366 | orchestrator | Monday 16 March 2026 00:23:11 +0000 (0:00:00.793) 0:00:04.281 ********** 2026-03-16 00:24:56.611377 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-03-16 00:24:56.611394 | orchestrator | ok: [testbed-manager] 2026-03-16 00:24:56.611405 | orchestrator | 2026-03-16 00:24:56.611416 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-03-16 00:24:56.611455 | orchestrator | Monday 16 March 2026 00:23:42 +0000 (0:00:31.276) 0:00:35.557 ********** 2026-03-16 00:24:56.611469 | orchestrator | changed: [testbed-manager] 2026-03-16 00:24:56.611481 | orchestrator | 2026-03-16 00:24:56.611494 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-03-16 00:24:56.611507 | orchestrator | Monday 16 March 2026 00:23:55 +0000 (0:00:12.978) 0:00:48.536 ********** 2026-03-16 00:24:56.611520 | orchestrator | Pausing for 60 seconds 2026-03-16 00:24:56.611532 | orchestrator | changed: [testbed-manager] 2026-03-16 00:24:56.611545 | orchestrator | 2026-03-16 00:24:56.611558 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-03-16 00:24:56.611570 | orchestrator | Monday 16 March 2026 00:24:55 +0000 (0:01:00.078) 0:01:48.614 ********** 2026-03-16 00:24:56.611584 | orchestrator | ok: [testbed-manager] 2026-03-16 00:24:56.611632 | orchestrator | 2026-03-16 00:24:56.611645 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-03-16 00:24:56.611658 | orchestrator | Monday 16 March 2026 00:24:55 +0000 (0:00:00.064) 0:01:48.678 ********** 2026-03-16 00:24:56.611670 | orchestrator | changed: [testbed-manager] 2026-03-16 00:24:56.611682 | orchestrator | 2026-03-16 00:24:56.611694 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-16 00:24:56.611707 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-16 00:24:56.611719 | orchestrator | 2026-03-16 00:24:56.611732 | orchestrator | 2026-03-16 00:24:56.611744 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-03-16 00:24:56.611757 | orchestrator | Monday 16 March 2026 00:24:56 +0000 (0:00:00.583) 0:01:49.262 ********** 2026-03-16 00:24:56.611769 | orchestrator | =============================================================================== 2026-03-16 00:24:56.611801 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2026-03-16 00:24:56.611813 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.28s 2026-03-16 00:24:56.611824 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.98s 2026-03-16 00:24:56.611834 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.07s 2026-03-16 00:24:56.611845 | orchestrator | osism.services.squid : Create required directories ---------------------- 0.98s 2026-03-16 00:24:56.611856 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.95s 2026-03-16 00:24:56.611866 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.79s 2026-03-16 00:24:56.611877 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.58s 2026-03-16 00:24:56.611888 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.30s 2026-03-16 00:24:56.611898 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.07s 2026-03-16 00:24:56.611909 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2026-03-16 00:24:56.884588 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-03-16 00:24:56.884745 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-03-16 00:24:56.933916 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-16 00:24:56.934178 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh 
kolla/release 2026-03-16 00:24:56.937856 | orchestrator | + set -e 2026-03-16 00:24:56.937909 | orchestrator | + NAMESPACE=kolla/release 2026-03-16 00:24:56.937923 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-03-16 00:24:56.944517 | orchestrator | ++ semver 9.5.0 9.0.0 2026-03-16 00:24:57.014069 | orchestrator | + [[ 1 -lt 0 ]] 2026-03-16 00:24:57.014149 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-03-16 00:25:09.034749 | orchestrator | 2026-03-16 00:25:09 | INFO  | Task 0f9b1620-25c3-4c1f-849e-9bacbc11ebc2 (operator) was prepared for execution. 2026-03-16 00:25:09.034845 | orchestrator | 2026-03-16 00:25:09 | INFO  | It takes a moment until task 0f9b1620-25c3-4c1f-849e-9bacbc11ebc2 (operator) has been started and output is visible here. 2026-03-16 00:25:25.085929 | orchestrator | 2026-03-16 00:25:25.086093 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-03-16 00:25:25.086113 | orchestrator | 2026-03-16 00:25:25.086125 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-16 00:25:25.086137 | orchestrator | Monday 16 March 2026 00:25:13 +0000 (0:00:00.141) 0:00:00.141 ********** 2026-03-16 00:25:25.086149 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:25:25.086162 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:25:25.086172 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:25:25.086183 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:25:25.086194 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:25:25.086205 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:25:25.086216 | orchestrator | 2026-03-16 00:25:25.086228 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-03-16 00:25:25.086239 | orchestrator | Monday 16 March 2026 00:25:16 +0000 (0:00:03.413) 0:00:03.555 
********** 2026-03-16 00:25:25.086249 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:25:25.086260 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:25:25.086271 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:25:25.086298 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:25:25.086310 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:25:25.086321 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:25:25.086332 | orchestrator | 2026-03-16 00:25:25.086343 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-03-16 00:25:25.086354 | orchestrator | 2026-03-16 00:25:25.086365 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-16 00:25:25.086376 | orchestrator | Monday 16 March 2026 00:25:17 +0000 (0:00:00.854) 0:00:04.409 ********** 2026-03-16 00:25:25.086387 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:25:25.086398 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:25:25.086408 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:25:25.086419 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:25:25.086430 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:25:25.086442 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:25:25.086453 | orchestrator | 2026-03-16 00:25:25.086465 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-16 00:25:25.086478 | orchestrator | Monday 16 March 2026 00:25:17 +0000 (0:00:00.156) 0:00:04.565 ********** 2026-03-16 00:25:25.086491 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:25:25.086503 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:25:25.086523 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:25:25.086543 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:25:25.086588 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:25:25.086603 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:25:25.086616 | orchestrator | 2026-03-16 00:25:25.086629 | 
orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-16 00:25:25.086643 | orchestrator | Monday 16 March 2026 00:25:17 +0000 (0:00:00.159) 0:00:04.725 ********** 2026-03-16 00:25:25.086656 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:25:25.086669 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:25:25.086682 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:25:25.086694 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:25:25.086708 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:25:25.086721 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:25:25.086733 | orchestrator | 2026-03-16 00:25:25.086746 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-16 00:25:25.086759 | orchestrator | Monday 16 March 2026 00:25:18 +0000 (0:00:00.642) 0:00:05.367 ********** 2026-03-16 00:25:25.086771 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:25:25.086789 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:25:25.086808 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:25:25.086827 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:25:25.086844 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:25:25.086863 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:25:25.086903 | orchestrator | 2026-03-16 00:25:25.086916 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-16 00:25:25.086927 | orchestrator | Monday 16 March 2026 00:25:19 +0000 (0:00:00.797) 0:00:06.165 ********** 2026-03-16 00:25:25.086938 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-03-16 00:25:25.086949 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-03-16 00:25:25.086960 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-03-16 00:25:25.086970 | orchestrator | changed: [testbed-node-4] => (item=adm) 2026-03-16 00:25:25.086981 | 
orchestrator | changed: [testbed-node-5] => (item=adm) 2026-03-16 00:25:25.086992 | orchestrator | changed: [testbed-node-3] => (item=adm) 2026-03-16 00:25:25.087007 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-03-16 00:25:25.087025 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-03-16 00:25:25.087044 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-03-16 00:25:25.087063 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-03-16 00:25:25.087081 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-03-16 00:25:25.087100 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-03-16 00:25:25.087118 | orchestrator | 2026-03-16 00:25:25.087137 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-16 00:25:25.087156 | orchestrator | Monday 16 March 2026 00:25:20 +0000 (0:00:01.260) 0:00:07.426 ********** 2026-03-16 00:25:25.087168 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:25:25.087178 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:25:25.087189 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:25:25.087200 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:25:25.087211 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:25:25.087222 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:25:25.087233 | orchestrator | 2026-03-16 00:25:25.087243 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-16 00:25:25.087255 | orchestrator | Monday 16 March 2026 00:25:21 +0000 (0:00:01.185) 0:00:08.612 ********** 2026-03-16 00:25:25.087266 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-03-16 00:25:25.087277 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2026-03-16 00:25:25.087288 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-03-16 00:25:25.087299 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-03-16 00:25:25.087330 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-03-16 00:25:25.087341 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-03-16 00:25:25.087352 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-03-16 00:25:25.087363 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-03-16 00:25:25.087374 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-03-16 00:25:25.087384 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-03-16 00:25:25.087395 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-03-16 00:25:25.087406 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-03-16 00:25:25.087416 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-03-16 00:25:25.087427 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-03-16 00:25:25.087438 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-03-16 00:25:25.087448 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-03-16 00:25:25.087459 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-03-16 00:25:25.087470 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-03-16 00:25:25.087481 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-03-16 00:25:25.087492 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-03-16 00:25:25.087513 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-03-16 00:25:25.087524 | 
orchestrator | 2026-03-16 00:25:25.087535 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-16 00:25:25.087546 | orchestrator | Monday 16 March 2026 00:25:22 +0000 (0:00:01.264) 0:00:09.876 ********** 2026-03-16 00:25:25.087593 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:25:25.087606 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:25:25.087617 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:25:25.087628 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:25:25.087639 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:25:25.087650 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:25:25.087661 | orchestrator | 2026-03-16 00:25:25.087672 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-16 00:25:25.087683 | orchestrator | Monday 16 March 2026 00:25:22 +0000 (0:00:00.161) 0:00:10.037 ********** 2026-03-16 00:25:25.087694 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:25:25.087704 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:25:25.087715 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:25:25.087726 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:25:25.087736 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:25:25.087747 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:25:25.087758 | orchestrator | 2026-03-16 00:25:25.087769 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-16 00:25:25.087780 | orchestrator | Monday 16 March 2026 00:25:23 +0000 (0:00:00.176) 0:00:10.214 ********** 2026-03-16 00:25:25.087791 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:25:25.087802 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:25:25.087812 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:25:25.087823 | orchestrator | changed: [testbed-node-2] 2026-03-16 
00:25:25.087833 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:25:25.087844 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:25:25.087855 | orchestrator | 2026-03-16 00:25:25.087866 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-16 00:25:25.087877 | orchestrator | Monday 16 March 2026 00:25:23 +0000 (0:00:00.725) 0:00:10.939 ********** 2026-03-16 00:25:25.087888 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:25:25.087898 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:25:25.087909 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:25:25.087920 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:25:25.087941 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:25:25.087952 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:25:25.087963 | orchestrator | 2026-03-16 00:25:25.087974 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-16 00:25:25.087985 | orchestrator | Monday 16 March 2026 00:25:24 +0000 (0:00:00.192) 0:00:11.131 ********** 2026-03-16 00:25:25.087996 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-16 00:25:25.088007 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-16 00:25:25.088018 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:25:25.088028 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:25:25.088039 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-03-16 00:25:25.088050 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:25:25.088060 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-16 00:25:25.088071 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:25:25.088082 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-16 00:25:25.088093 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:25:25.088103 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-16 
00:25:25.088114 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:25:25.088125 | orchestrator | 2026-03-16 00:25:25.088136 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-16 00:25:25.088147 | orchestrator | Monday 16 March 2026 00:25:24 +0000 (0:00:00.716) 0:00:11.848 ********** 2026-03-16 00:25:25.088165 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:25:25.088176 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:25:25.088186 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:25:25.088197 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:25:25.088208 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:25:25.088218 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:25:25.088230 | orchestrator | 2026-03-16 00:25:25.088250 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-16 00:25:25.088270 | orchestrator | Monday 16 March 2026 00:25:24 +0000 (0:00:00.151) 0:00:11.999 ********** 2026-03-16 00:25:25.088288 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:25:25.088308 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:25:25.088326 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:25:25.088344 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:25:25.088365 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:25:26.415275 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:25:26.415380 | orchestrator | 2026-03-16 00:25:26.415395 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-16 00:25:26.415408 | orchestrator | Monday 16 March 2026 00:25:25 +0000 (0:00:00.152) 0:00:12.152 ********** 2026-03-16 00:25:26.415419 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:25:26.415431 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:25:26.415442 | orchestrator | skipping: [testbed-node-2] 2026-03-16 
00:25:26.415453 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:25:26.415463 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:25:26.415474 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:25:26.415485 | orchestrator | 2026-03-16 00:25:26.415496 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-16 00:25:26.415507 | orchestrator | Monday 16 March 2026 00:25:25 +0000 (0:00:00.170) 0:00:12.323 ********** 2026-03-16 00:25:26.415518 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:25:26.415529 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:25:26.415620 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:25:26.415636 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:25:26.415647 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:25:26.415657 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:25:26.415668 | orchestrator | 2026-03-16 00:25:26.415678 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-16 00:25:26.415689 | orchestrator | Monday 16 March 2026 00:25:25 +0000 (0:00:00.662) 0:00:12.986 ********** 2026-03-16 00:25:26.415700 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:25:26.415711 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:25:26.415722 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:25:26.415733 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:25:26.415743 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:25:26.415754 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:25:26.415764 | orchestrator | 2026-03-16 00:25:26.415775 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-16 00:25:26.415787 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-16 00:25:26.415800 | orchestrator | testbed-node-1 : ok=12  changed=8 
 unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-16 00:25:26.415812 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-16 00:25:26.415824 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-16 00:25:26.415836 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-16 00:25:26.415875 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-16 00:25:26.415888 | orchestrator | 2026-03-16 00:25:26.415900 | orchestrator | 2026-03-16 00:25:26.415919 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-16 00:25:26.415937 | orchestrator | Monday 16 March 2026 00:25:26 +0000 (0:00:00.246) 0:00:13.233 ********** 2026-03-16 00:25:26.415957 | orchestrator | =============================================================================== 2026-03-16 00:25:26.415976 | orchestrator | Gathering Facts --------------------------------------------------------- 3.41s 2026-03-16 00:25:26.415994 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.26s 2026-03-16 00:25:26.416013 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.26s 2026-03-16 00:25:26.416032 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.19s 2026-03-16 00:25:26.416049 | orchestrator | Do not require tty for all users ---------------------------------------- 0.85s 2026-03-16 00:25:26.416067 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.80s 2026-03-16 00:25:26.416086 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.73s 2026-03-16 00:25:26.416106 | orchestrator | osism.commons.operator : Set ssh 
authorized keys ------------------------ 0.72s 2026-03-16 00:25:26.416125 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.66s 2026-03-16 00:25:26.416145 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.64s 2026-03-16 00:25:26.416164 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.25s 2026-03-16 00:25:26.416179 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.19s 2026-03-16 00:25:26.416190 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.18s 2026-03-16 00:25:26.416201 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.17s 2026-03-16 00:25:26.416212 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.16s 2026-03-16 00:25:26.416223 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.16s 2026-03-16 00:25:26.416233 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.16s 2026-03-16 00:25:26.416244 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.15s 2026-03-16 00:25:26.416255 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.15s 2026-03-16 00:25:26.712635 | orchestrator | + osism apply --environment custom facts 2026-03-16 00:25:28.644520 | orchestrator | 2026-03-16 00:25:28 | INFO  | Trying to run play facts in environment custom 2026-03-16 00:25:38.762488 | orchestrator | 2026-03-16 00:25:38 | INFO  | Task c38190b7-4c31-4fde-81d1-d39668744fcd (facts) was prepared for execution. 2026-03-16 00:25:38.762674 | orchestrator | 2026-03-16 00:25:38 | INFO  | It takes a moment until task c38190b7-4c31-4fde-81d1-d39668744fcd (facts) has been started and output is visible here. 
2026-03-16 00:26:24.483583 | orchestrator | 2026-03-16 00:26:24.483717 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-03-16 00:26:24.483747 | orchestrator | 2026-03-16 00:26:24.483760 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-16 00:26:24.483773 | orchestrator | Monday 16 March 2026 00:25:42 +0000 (0:00:00.082) 0:00:00.082 ********** 2026-03-16 00:26:24.483786 | orchestrator | ok: [testbed-manager] 2026-03-16 00:26:24.483799 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:26:24.483811 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:26:24.483822 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:26:24.483833 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:26:24.483845 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:26:24.483881 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:26:24.483893 | orchestrator | 2026-03-16 00:26:24.483905 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-03-16 00:26:24.483917 | orchestrator | Monday 16 March 2026 00:25:44 +0000 (0:00:01.461) 0:00:01.544 ********** 2026-03-16 00:26:24.483929 | orchestrator | ok: [testbed-manager] 2026-03-16 00:26:24.483940 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:26:24.483952 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:26:24.483963 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:26:24.483975 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:26:24.483986 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:26:24.483997 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:26:24.484008 | orchestrator | 2026-03-16 00:26:24.484019 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-03-16 00:26:24.484031 | orchestrator | 2026-03-16 00:26:24.484042 | orchestrator | TASK 
[osism.commons.repository : Gather variables for each operating system] *** 2026-03-16 00:26:24.484054 | orchestrator | Monday 16 March 2026 00:25:45 +0000 (0:00:01.125) 0:00:02.670 ********** 2026-03-16 00:26:24.484067 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:26:24.484078 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:26:24.484089 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:26:24.484101 | orchestrator | 2026-03-16 00:26:24.484113 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-16 00:26:24.484125 | orchestrator | Monday 16 March 2026 00:25:45 +0000 (0:00:00.079) 0:00:02.750 ********** 2026-03-16 00:26:24.484135 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:26:24.484145 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:26:24.484155 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:26:24.484166 | orchestrator | 2026-03-16 00:26:24.484177 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-16 00:26:24.484188 | orchestrator | Monday 16 March 2026 00:25:45 +0000 (0:00:00.174) 0:00:02.924 ********** 2026-03-16 00:26:24.484200 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:26:24.484212 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:26:24.484223 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:26:24.484234 | orchestrator | 2026-03-16 00:26:24.484245 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-16 00:26:24.484258 | orchestrator | Monday 16 March 2026 00:25:45 +0000 (0:00:00.180) 0:00:03.105 ********** 2026-03-16 00:26:24.484271 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-16 00:26:24.484284 | orchestrator | 2026-03-16 00:26:24.484295 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d 
directory] ***** 2026-03-16 00:26:24.484307 | orchestrator | Monday 16 March 2026 00:25:45 +0000 (0:00:00.101) 0:00:03.207 ********** 2026-03-16 00:26:24.484318 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:26:24.484328 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:26:24.484337 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:26:24.484348 | orchestrator | 2026-03-16 00:26:24.484359 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-16 00:26:24.484370 | orchestrator | Monday 16 March 2026 00:25:46 +0000 (0:00:00.421) 0:00:03.629 ********** 2026-03-16 00:26:24.484380 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:26:24.484390 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:26:24.484402 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:26:24.484413 | orchestrator | 2026-03-16 00:26:24.484425 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-16 00:26:24.484435 | orchestrator | Monday 16 March 2026 00:25:46 +0000 (0:00:00.107) 0:00:03.737 ********** 2026-03-16 00:26:24.484446 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:26:24.484457 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:26:24.484468 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:26:24.484479 | orchestrator | 2026-03-16 00:26:24.484510 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-16 00:26:24.484535 | orchestrator | Monday 16 March 2026 00:25:47 +0000 (0:00:01.033) 0:00:04.770 ********** 2026-03-16 00:26:24.484547 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:26:24.484559 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:26:24.484569 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:26:24.484581 | orchestrator | 2026-03-16 00:26:24.484591 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-16 
00:26:24.484655 | orchestrator | Monday 16 March 2026 00:25:47 +0000 (0:00:00.520) 0:00:05.290 ********** 2026-03-16 00:26:24.484670 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:26:24.484682 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:26:24.484693 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:26:24.484703 | orchestrator | 2026-03-16 00:26:24.484714 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-16 00:26:24.484725 | orchestrator | Monday 16 March 2026 00:25:49 +0000 (0:00:01.088) 0:00:06.379 ********** 2026-03-16 00:26:24.484737 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:26:24.484748 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:26:24.484760 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:26:24.484771 | orchestrator | 2026-03-16 00:26:24.484782 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-03-16 00:26:24.484794 | orchestrator | Monday 16 March 2026 00:26:06 +0000 (0:00:17.242) 0:00:23.621 ********** 2026-03-16 00:26:24.484806 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:26:24.484817 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:26:24.484830 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:26:24.484841 | orchestrator | 2026-03-16 00:26:24.484852 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2026-03-16 00:26:24.484884 | orchestrator | Monday 16 March 2026 00:26:06 +0000 (0:00:00.080) 0:00:23.702 ********** 2026-03-16 00:26:24.484897 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:26:24.484908 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:26:24.484920 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:26:24.484931 | orchestrator | 2026-03-16 00:26:24.484940 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-16 
00:26:24.484957 | orchestrator | Monday 16 March 2026 00:26:14 +0000 (0:00:08.462) 0:00:32.165 **********
2026-03-16 00:26:24.484969 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:26:24.484981 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:26:24.484990 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:26:24.485000 | orchestrator |
2026-03-16 00:26:24.485010 | orchestrator | TASK [Copy fact files] *********************************************************
2026-03-16 00:26:24.485019 | orchestrator | Monday 16 March 2026 00:26:15 +0000 (0:00:00.477) 0:00:32.642 **********
2026-03-16 00:26:24.485030 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-03-16 00:26:24.485041 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-03-16 00:26:24.485050 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-03-16 00:26:24.485059 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-03-16 00:26:24.485071 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-03-16 00:26:24.485081 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-03-16 00:26:24.485093 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-03-16 00:26:24.485103 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-03-16 00:26:24.485114 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-03-16 00:26:24.485125 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-03-16 00:26:24.485137 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-03-16 00:26:24.485147 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-03-16 00:26:24.485158 | orchestrator |
2026-03-16 00:26:24.485168 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-16 00:26:24.485188 | orchestrator | Monday 16 March 2026 00:26:19 +0000 (0:00:03.844) 0:00:36.487 **********
2026-03-16 00:26:24.485198 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:26:24.485208 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:26:24.485218 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:26:24.485229 | orchestrator |
2026-03-16 00:26:24.485239 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-16 00:26:24.485250 | orchestrator |
2026-03-16 00:26:24.485261 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-16 00:26:24.485271 | orchestrator | Monday 16 March 2026 00:26:20 +0000 (0:00:01.586) 0:00:38.074 **********
2026-03-16 00:26:24.485282 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:26:24.485293 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:26:24.485304 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:26:24.485318 | orchestrator | ok: [testbed-manager]
2026-03-16 00:26:24.485332 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:26:24.485344 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:26:24.485354 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:26:24.485364 | orchestrator |
2026-03-16 00:26:24.485376 | orchestrator | PLAY RECAP *********************************************************************
2026-03-16 00:26:24.485388 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-16 00:26:24.485400 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-16 00:26:24.485413 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-16 00:26:24.485423 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-16 00:26:24.485434 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-16 00:26:24.485445 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-16 00:26:24.485456 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-16 00:26:24.485466 | orchestrator |
2026-03-16 00:26:24.485478 | orchestrator |
2026-03-16 00:26:24.485516 | orchestrator | TASKS RECAP ********************************************************************
2026-03-16 00:26:24.485528 | orchestrator | Monday 16 March 2026 00:26:24 +0000 (0:00:03.685) 0:00:41.759 **********
2026-03-16 00:26:24.485540 | orchestrator | ===============================================================================
2026-03-16 00:26:24.485552 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.24s
2026-03-16 00:26:24.485564 | orchestrator | Install required packages (Debian) -------------------------------------- 8.46s
2026-03-16 00:26:24.485576 | orchestrator | Copy fact files --------------------------------------------------------- 3.84s
2026-03-16 00:26:24.485587 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.69s
2026-03-16 00:26:24.485598 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.59s
2026-03-16 00:26:24.485610 | orchestrator | Create custom facts directory ------------------------------------------- 1.46s
2026-03-16 00:26:24.485636 | orchestrator | Copy fact file ---------------------------------------------------------- 1.13s
2026-03-16 00:26:24.726569 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.09s
2026-03-16 00:26:24.726669 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.03s
2026-03-16 00:26:24.726705 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.52s
2026-03-16 00:26:24.726717 | orchestrator | Create custom facts directory ------------------------------------------- 0.48s
2026-03-16 00:26:24.726750 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.42s
2026-03-16 00:26:24.726762 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.18s
2026-03-16 00:26:24.726772 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.17s
2026-03-16 00:26:24.726782 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s
2026-03-16 00:26:24.726792 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.10s
2026-03-16 00:26:24.726804 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.08s
2026-03-16 00:26:24.726814 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.08s
2026-03-16 00:26:25.097811 | orchestrator | + osism apply bootstrap
2026-03-16 00:26:37.183672 | orchestrator | 2026-03-16 00:26:37 | INFO  | Task 114e300d-4292-4a05-aaab-338f5dc95a45 (bootstrap) was prepared for execution.
2026-03-16 00:26:37.183788 | orchestrator | 2026-03-16 00:26:37 | INFO  | It takes a moment until task 114e300d-4292-4a05-aaab-338f5dc95a45 (bootstrap) has been started and output is visible here.
2026-03-16 00:26:53.540443 | orchestrator |
2026-03-16 00:26:53.540577 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-03-16 00:26:53.540586 | orchestrator |
2026-03-16 00:26:53.540590 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-03-16 00:26:53.540595 | orchestrator | Monday 16 March 2026 00:26:41 +0000 (0:00:00.187) 0:00:00.187 **********
2026-03-16 00:26:53.540600 | orchestrator | ok: [testbed-manager]
2026-03-16 00:26:53.540605 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:26:53.540609 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:26:53.540613 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:26:53.540617 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:26:53.540631 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:26:53.540635 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:26:53.540639 | orchestrator |
2026-03-16 00:26:53.540644 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-16 00:26:53.540648 | orchestrator |
2026-03-16 00:26:53.540651 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-16 00:26:53.540655 | orchestrator | Monday 16 March 2026 00:26:41 +0000 (0:00:00.282) 0:00:00.469 **********
2026-03-16 00:26:53.540659 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:26:53.540663 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:26:53.540667 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:26:53.540671 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:26:53.540675 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:26:53.540679 | orchestrator | ok: [testbed-manager]
2026-03-16 00:26:53.540682 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:26:53.540686 | orchestrator |
2026-03-16 00:26:53.540690 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-03-16 00:26:53.540694 | orchestrator |
2026-03-16 00:26:53.540698 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-16 00:26:53.540701 | orchestrator | Monday 16 March 2026 00:26:45 +0000 (0:00:03.739) 0:00:04.208 **********
2026-03-16 00:26:53.540706 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-03-16 00:26:53.540710 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-03-16 00:26:53.540714 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-03-16 00:26:53.540718 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-16 00:26:53.540722 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-03-16 00:26:53.540726 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-16 00:26:53.540730 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-03-16 00:26:53.540734 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-03-16 00:26:53.540737 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-16 00:26:53.540756 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-03-16 00:26:53.540760 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-16 00:26:53.540764 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-16 00:26:53.540768 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-16 00:26:53.540772 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-16 00:26:53.540776 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-03-16 00:26:53.540780 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-03-16 00:26:53.540784 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-16 00:26:53.540787 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-16 00:26:53.540791 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-03-16 00:26:53.540795 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-16 00:26:53.540799 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-16 00:26:53.540803 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-16 00:26:53.540806 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:26:53.540810 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-03-16 00:26:53.540814 | orchestrator | skipping: [testbed-manager]
2026-03-16 00:26:53.540818 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-16 00:26:53.540822 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-03-16 00:26:53.540825 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-16 00:26:53.540829 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:26:53.540833 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-16 00:26:53.540837 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-16 00:26:53.540841 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-16 00:26:53.540845 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-16 00:26:53.540849 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-16 00:26:53.540853 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-16 00:26:53.540857 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-16 00:26:53.540861 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-16 00:26:53.540864 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-16 00:26:53.540868 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-16 00:26:53.540872 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-16 00:26:53.540876 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-03-16 00:26:53.540879 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:26:53.540883 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-16 00:26:53.540887 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-16 00:26:53.540891 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-16 00:26:53.540894 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-16 00:26:53.540898 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:26:53.540911 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-16 00:26:53.540915 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-16 00:26:53.540919 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-16 00:26:53.540934 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:26:53.540938 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-16 00:26:53.540942 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-16 00:26:53.540946 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-16 00:26:53.540956 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-16 00:26:53.540962 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:26:53.540967 | orchestrator |
2026-03-16 00:26:53.540973 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-03-16 00:26:53.540980 | orchestrator |
2026-03-16 00:26:53.540985 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-03-16 00:26:53.540992 | orchestrator | Monday 16 March 2026 00:26:46 +0000 (0:00:00.488) 0:00:04.697 **********
2026-03-16 00:26:53.540997 | orchestrator | ok: [testbed-manager]
2026-03-16 00:26:53.541001 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:26:53.541004 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:26:53.541008 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:26:53.541012 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:26:53.541016 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:26:53.541020 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:26:53.541024 | orchestrator |
2026-03-16 00:26:53.541029 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-03-16 00:26:53.541033 | orchestrator | Monday 16 March 2026 00:26:47 +0000 (0:00:01.240) 0:00:05.937 **********
2026-03-16 00:26:53.541037 | orchestrator | ok: [testbed-manager]
2026-03-16 00:26:53.541042 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:26:53.541048 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:26:53.541054 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:26:53.541061 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:26:53.541067 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:26:53.541074 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:26:53.541080 | orchestrator |
2026-03-16 00:26:53.541086 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-03-16 00:26:53.541092 | orchestrator | Monday 16 March 2026 00:26:48 +0000 (0:00:00.294) 0:00:07.215 **********
2026-03-16 00:26:53.541099 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-16 00:26:53.541108 | orchestrator |
2026-03-16 00:26:53.541114 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-03-16 00:26:53.541120 | orchestrator | Monday 16 March 2026 00:26:48 +0000 (0:00:00.294) 0:00:07.510 **********
2026-03-16 00:26:53.541127 | orchestrator | changed: [testbed-manager]
2026-03-16 00:26:53.541133 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:26:53.541139 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:26:53.541157 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:26:53.541164 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:26:53.541170 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:26:53.541176 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:26:53.541182 | orchestrator |
2026-03-16 00:26:53.541192 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-03-16 00:26:53.541200 | orchestrator | Monday 16 March 2026 00:26:50 +0000 (0:00:02.052) 0:00:09.563 **********
2026-03-16 00:26:53.541207 | orchestrator | skipping: [testbed-manager]
2026-03-16 00:26:53.541215 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-16 00:26:53.541224 | orchestrator |
2026-03-16 00:26:53.541230 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-03-16 00:26:53.541236 | orchestrator | Monday 16 March 2026 00:26:51 +0000 (0:00:00.287) 0:00:09.851 **********
2026-03-16 00:26:53.541242 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:26:53.541248 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:26:53.541254 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:26:53.541260 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:26:53.541267 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:26:53.541274 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:26:53.541288 | orchestrator |
2026-03-16 00:26:53.541296 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-03-16 00:26:53.541300 | orchestrator | Monday 16 March 2026 00:26:52 +0000 (0:00:01.104) 0:00:10.956 **********
2026-03-16 00:26:53.541304 | orchestrator | skipping: [testbed-manager]
2026-03-16 00:26:53.541308 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:26:53.541311 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:26:53.541315 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:26:53.541318 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:26:53.541322 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:26:53.541328 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:26:53.541334 | orchestrator |
2026-03-16 00:26:53.541340 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-03-16 00:26:53.541347 | orchestrator | Monday 16 March 2026 00:26:52 +0000 (0:00:00.606) 0:00:11.563 **********
2026-03-16 00:26:53.541353 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:26:53.541359 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:26:53.541365 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:26:53.541371 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:26:53.541378 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:26:53.541382 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:26:53.541386 | orchestrator | ok: [testbed-manager]
2026-03-16 00:26:53.541390 | orchestrator |
2026-03-16 00:26:53.541393 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-03-16 00:26:53.541398 | orchestrator | Monday 16 March 2026 00:26:53 +0000 (0:00:00.412) 0:00:11.975 **********
2026-03-16 00:26:53.541402 | orchestrator | skipping: [testbed-manager]
2026-03-16 00:26:53.541405 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:26:53.541414 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:27:06.835940 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:27:06.836055 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:27:06.836071 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:27:06.836083 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:27:06.836094 | orchestrator |
2026-03-16 00:27:06.836107 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-03-16 00:27:06.836119 | orchestrator | Monday 16 March 2026 00:26:53 +0000 (0:00:00.256) 0:00:12.232 **********
2026-03-16 00:27:06.836132 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-16 00:27:06.836159 | orchestrator |
2026-03-16 00:27:06.836171 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-03-16 00:27:06.836183 | orchestrator | Monday 16 March 2026 00:26:53 +0000 (0:00:00.351) 0:00:12.584 **********
2026-03-16 00:27:06.836194 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-16 00:27:06.836205 | orchestrator |
2026-03-16 00:27:06.836216 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-03-16 00:27:06.836227 | orchestrator | Monday 16 March 2026 00:26:54 +0000 (0:00:00.286) 0:00:12.870 **********
2026-03-16 00:27:06.836239 | orchestrator | ok: [testbed-manager]
2026-03-16 00:27:06.836251 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:27:06.836261 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:27:06.836272 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:27:06.836283 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:27:06.836294 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:27:06.836306 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:27:06.836326 | orchestrator |
2026-03-16 00:27:06.836344 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-03-16 00:27:06.836361 | orchestrator | Monday 16 March 2026 00:26:56 +0000 (0:00:01.732) 0:00:14.602 **********
2026-03-16 00:27:06.836415 | orchestrator | skipping: [testbed-manager]
2026-03-16 00:27:06.836435 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:27:06.836484 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:27:06.836498 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:27:06.836511 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:27:06.836523 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:27:06.836535 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:27:06.836546 | orchestrator |
2026-03-16 00:27:06.836557 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-03-16 00:27:06.836568 | orchestrator | Monday 16 March 2026 00:26:56 +0000 (0:00:00.241) 0:00:14.844 **********
2026-03-16 00:27:06.836579 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:27:06.836589 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:27:06.836600 | orchestrator | ok: [testbed-manager]
2026-03-16 00:27:06.836611 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:27:06.836622 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:27:06.836632 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:27:06.836643 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:27:06.836653 | orchestrator |
2026-03-16 00:27:06.836664 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-03-16 00:27:06.836675 | orchestrator | Monday 16 March 2026 00:26:56 +0000 (0:00:00.574) 0:00:15.418 **********
2026-03-16 00:27:06.836686 | orchestrator | skipping: [testbed-manager]
2026-03-16 00:27:06.836697 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:27:06.836708 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:27:06.836719 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:27:06.836730 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:27:06.836740 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:27:06.836752 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:27:06.836763 | orchestrator |
2026-03-16 00:27:06.836774 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-03-16 00:27:06.836786 | orchestrator | Monday 16 March 2026 00:26:57 +0000 (0:00:00.266) 0:00:15.685 **********
2026-03-16 00:27:06.836797 | orchestrator | ok: [testbed-manager]
2026-03-16 00:27:06.836807 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:27:06.836818 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:27:06.836829 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:27:06.836839 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:27:06.836850 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:27:06.836871 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:27:06.836882 | orchestrator |
2026-03-16 00:27:06.836893 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-03-16 00:27:06.836904 | orchestrator | Monday 16 March 2026 00:26:57 +0000 (0:00:00.746) 0:00:16.432 **********
2026-03-16 00:27:06.836915 | orchestrator | ok: [testbed-manager]
2026-03-16 00:27:06.836926 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:27:06.836936 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:27:06.836947 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:27:06.836958 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:27:06.836968 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:27:06.836979 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:27:06.836990 | orchestrator |
2026-03-16 00:27:06.837001 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-03-16 00:27:06.837012 | orchestrator | Monday 16 March 2026 00:26:59 +0000 (0:00:01.189) 0:00:17.622 **********
2026-03-16 00:27:06.837023 | orchestrator | ok: [testbed-manager]
2026-03-16 00:27:06.837033 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:27:06.837044 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:27:06.837055 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:27:06.837066 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:27:06.837076 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:27:06.837087 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:27:06.837098 | orchestrator |
2026-03-16 00:27:06.837109 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-03-16 00:27:06.837129 | orchestrator | Monday 16 March 2026 00:27:00 +0000 (0:00:01.099) 0:00:18.721 **********
2026-03-16 00:27:06.837159 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-16 00:27:06.837171 | orchestrator |
2026-03-16 00:27:06.837182 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-03-16 00:27:06.837192 | orchestrator | Monday 16 March 2026 00:27:00 +0000 (0:00:00.405) 0:00:19.126 **********
2026-03-16 00:27:06.837203 | orchestrator | skipping: [testbed-manager]
2026-03-16 00:27:06.837214 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:27:06.837225 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:27:06.837235 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:27:06.837246 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:27:06.837257 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:27:06.837267 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:27:06.837278 | orchestrator |
2026-03-16 00:27:06.837289 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-03-16 00:27:06.837300 | orchestrator | Monday 16 March 2026 00:27:01 +0000 (0:00:01.369) 0:00:20.496 **********
2026-03-16 00:27:06.837310 | orchestrator | ok: [testbed-manager]
2026-03-16 00:27:06.837321 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:27:06.837332 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:27:06.837342 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:27:06.837353 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:27:06.837364 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:27:06.837376 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:27:06.837394 | orchestrator |
2026-03-16 00:27:06.837412 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-03-16 00:27:06.837430 | orchestrator | Monday 16 March 2026 00:27:02 +0000 (0:00:00.250) 0:00:20.747 **********
2026-03-16 00:27:06.837474 | orchestrator | ok: [testbed-manager]
2026-03-16 00:27:06.837493 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:27:06.837510 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:27:06.837528 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:27:06.837539 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:27:06.837550 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:27:06.837560 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:27:06.837570 | orchestrator |
2026-03-16 00:27:06.837581 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-03-16 00:27:06.837592 | orchestrator | Monday 16 March 2026 00:27:02 +0000 (0:00:00.240) 0:00:20.988 **********
2026-03-16 00:27:06.837603 | orchestrator | ok: [testbed-manager]
2026-03-16 00:27:06.837613 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:27:06.837624 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:27:06.837634 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:27:06.837645 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:27:06.837655 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:27:06.837666 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:27:06.837676 | orchestrator |
2026-03-16 00:27:06.837687 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-03-16 00:27:06.837698 | orchestrator | Monday 16 March 2026 00:27:02 +0000 (0:00:00.233) 0:00:21.221 **********
2026-03-16 00:27:06.837709 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-16 00:27:06.837722 | orchestrator |
2026-03-16 00:27:06.837732 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-03-16 00:27:06.837743 | orchestrator | Monday 16 March 2026 00:27:02 +0000 (0:00:00.313) 0:00:21.535 **********
2026-03-16 00:27:06.837754 | orchestrator | ok: [testbed-manager]
2026-03-16 00:27:06.837764 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:27:06.837784 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:27:06.837795 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:27:06.837805 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:27:06.837816 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:27:06.837826 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:27:06.837837 | orchestrator |
2026-03-16 00:27:06.837848 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-03-16 00:27:06.837859 | orchestrator | Monday 16 March 2026 00:27:03 +0000 (0:00:00.569) 0:00:22.104 **********
2026-03-16 00:27:06.837870 | orchestrator | skipping: [testbed-manager]
2026-03-16 00:27:06.837880 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:27:06.837891 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:27:06.837902 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:27:06.837912 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:27:06.837923 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:27:06.837934 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:27:06.837944 | orchestrator |
2026-03-16 00:27:06.837955 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-03-16 00:27:06.837966 | orchestrator | Monday 16 March 2026 00:27:03 +0000 (0:00:00.220) 0:00:22.325 **********
2026-03-16 00:27:06.837977 | orchestrator | ok: [testbed-manager]
2026-03-16 00:27:06.837988 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:27:06.837998 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:27:06.838009 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:27:06.838080 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:27:06.838092 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:27:06.838103 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:27:06.838149 | orchestrator |
2026-03-16 00:27:06.838160 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-03-16 00:27:06.838171 | orchestrator | Monday 16 March 2026 00:27:04 +0000 (0:00:01.147) 0:00:23.473 **********
2026-03-16 00:27:06.838182 | orchestrator | ok: [testbed-manager]
2026-03-16 00:27:06.838193 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:27:06.838204 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:27:06.838214 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:27:06.838229 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:27:06.838259 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:27:06.838280 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:27:06.838299 | orchestrator |
2026-03-16 00:27:06.838317 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-16 00:27:06.838336 | orchestrator | Monday 16 March 2026 00:27:05 +0000 (0:00:00.716) 0:00:24.189 **********
2026-03-16 00:27:06.838351 | orchestrator | ok: [testbed-manager]
2026-03-16 00:27:06.838369 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:27:06.838388 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:27:06.838407 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:27:06.838464 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:27:48.204184 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:27:48.204289 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:27:48.204303 | orchestrator |
2026-03-16 00:27:48.204315 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-16 00:27:48.204326 | orchestrator | Monday 16 March 2026 00:27:06 +0000 (0:00:01.224) 0:00:25.414 **********
2026-03-16 00:27:48.204336 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:27:48.204347 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:27:48.204357 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:27:48.204367 | orchestrator | changed: [testbed-manager]
2026-03-16 00:27:48.204378 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:27:48.204435 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:27:48.204446 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:27:48.204455 | orchestrator |
2026-03-16 00:27:48.204465 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-03-16 00:27:48.204475 | orchestrator | Monday 16 March 2026 00:27:24 +0000 (0:00:17.605) 0:00:43.019 **********
2026-03-16 00:27:48.204485 | orchestrator | ok: [testbed-manager]
2026-03-16 00:27:48.204517 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:27:48.204528 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:27:48.204537 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:27:48.204547 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:27:48.204556 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:27:48.204566 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:27:48.204577 | orchestrator |
2026-03-16 00:27:48.204594 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-03-16 00:27:48.204610 | orchestrator | Monday 16 March 2026 00:27:24 +0000 (0:00:00.211) 0:00:43.231 **********
2026-03-16 00:27:48.204626 | orchestrator | ok: [testbed-manager]
2026-03-16 00:27:48.204642 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:27:48.204659 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:27:48.204676 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:27:48.204692 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:27:48.204709 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:27:48.204719 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:27:48.204730 | orchestrator |
2026-03-16 00:27:48.204742 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-03-16 00:27:48.204753 | orchestrator | Monday 16 March 2026 00:27:24 +0000 (0:00:00.226) 0:00:43.458 **********
2026-03-16 00:27:48.204764 | orchestrator | ok: [testbed-manager]
2026-03-16 00:27:48.204775 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:27:48.204786 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:27:48.204796 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:27:48.204807 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:27:48.204818 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:27:48.204833 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:27:48.204849 | orchestrator |
2026-03-16 00:27:48.204864 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-03-16 00:27:48.204878 | orchestrator | Monday 16 March 2026 00:27:25 +0000 (0:00:00.255) 0:00:43.713 **********
2026-03-16 00:27:48.204901 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-16 00:27:48.204923 | orchestrator |
2026-03-16 00:27:48.204938 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-03-16 00:27:48.204954 | orchestrator | Monday 16 March 2026 00:27:25 +0000 (0:00:00.278) 0:00:43.992 **********
2026-03-16 00:27:48.204971 | orchestrator | ok: [testbed-manager]
2026-03-16 00:27:48.204988 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:27:48.205004 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:27:48.205021 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:27:48.205038 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:27:48.205053 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:27:48.205068 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:27:48.205083 | orchestrator |
2026-03-16 00:27:48.205106 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-03-16 00:27:48.205124 | orchestrator | Monday 16 March 2026 00:27:27 +0000 (0:00:01.995) 0:00:45.987 **********
2026-03-16 00:27:48.205141 | orchestrator | changed: [testbed-manager]
2026-03-16 00:27:48.205157 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:27:48.205172 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:27:48.205189 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:27:48.205206 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:27:48.205222 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:27:48.205238 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:27:48.205251 | orchestrator |
2026-03-16 00:27:48.205261 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-03-16
00:27:48.205286 | orchestrator | Monday 16 March 2026 00:27:28 +0000 (0:00:01.172) 0:00:47.160 ********** 2026-03-16 00:27:48.205299 | orchestrator | ok: [testbed-manager] 2026-03-16 00:27:48.205316 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:27:48.205332 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:27:48.205362 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:27:48.205378 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:27:48.205423 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:27:48.205439 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:27:48.205456 | orchestrator | 2026-03-16 00:27:48.205472 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-03-16 00:27:48.205488 | orchestrator | Monday 16 March 2026 00:27:29 +0000 (0:00:01.040) 0:00:48.200 ********** 2026-03-16 00:27:48.205506 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:27:48.205523 | orchestrator | 2026-03-16 00:27:48.205541 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-03-16 00:27:48.205558 | orchestrator | Monday 16 March 2026 00:27:29 +0000 (0:00:00.295) 0:00:48.495 ********** 2026-03-16 00:27:48.205575 | orchestrator | changed: [testbed-manager] 2026-03-16 00:27:48.205591 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:27:48.205606 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:27:48.205621 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:27:48.205637 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:27:48.205668 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:27:48.205686 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:27:48.205701 | orchestrator | 2026-03-16 00:27:48.205742 | orchestrator | TASK 
[osism.services.rsyslog : Include additional log server tasks] ************ 2026-03-16 00:27:48.205758 | orchestrator | Monday 16 March 2026 00:27:31 +0000 (0:00:01.112) 0:00:49.607 ********** 2026-03-16 00:27:48.205774 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:27:48.205789 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:27:48.205804 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:27:48.205819 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:27:48.205833 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:27:48.205848 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:27:48.205863 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:27:48.205877 | orchestrator | 2026-03-16 00:27:48.205893 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-03-16 00:27:48.205907 | orchestrator | Monday 16 March 2026 00:27:31 +0000 (0:00:00.246) 0:00:49.854 ********** 2026-03-16 00:27:48.205922 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:27:48.205936 | orchestrator | 2026-03-16 00:27:48.205951 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-03-16 00:27:48.205965 | orchestrator | Monday 16 March 2026 00:27:31 +0000 (0:00:00.356) 0:00:50.210 ********** 2026-03-16 00:27:48.205978 | orchestrator | ok: [testbed-manager] 2026-03-16 00:27:48.205993 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:27:48.206008 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:27:48.206121 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:27:48.206140 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:27:48.206156 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:27:48.206173 | orchestrator | ok: [testbed-node-2] 2026-03-16 
00:27:48.206189 | orchestrator | 2026-03-16 00:27:48.206205 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-03-16 00:27:48.206220 | orchestrator | Monday 16 March 2026 00:27:33 +0000 (0:00:02.086) 0:00:52.297 ********** 2026-03-16 00:27:48.206235 | orchestrator | changed: [testbed-manager] 2026-03-16 00:27:48.206251 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:27:48.206265 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:27:48.206282 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:27:48.206298 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:27:48.206315 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:27:48.206331 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:27:48.206371 | orchestrator | 2026-03-16 00:27:48.206432 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-03-16 00:27:48.206450 | orchestrator | Monday 16 March 2026 00:27:34 +0000 (0:00:01.105) 0:00:53.402 ********** 2026-03-16 00:27:48.206467 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:27:48.206483 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:27:48.206497 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:27:48.206513 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:27:48.206526 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:27:48.206542 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:27:48.206556 | orchestrator | changed: [testbed-manager] 2026-03-16 00:27:48.206571 | orchestrator | 2026-03-16 00:27:48.206587 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-03-16 00:27:48.206602 | orchestrator | Monday 16 March 2026 00:27:45 +0000 (0:00:10.401) 0:01:03.804 ********** 2026-03-16 00:27:48.206617 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:27:48.206632 | orchestrator | ok: [testbed-manager] 2026-03-16 00:27:48.206646 | 
orchestrator | ok: [testbed-node-2] 2026-03-16 00:27:48.206661 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:27:48.206676 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:27:48.206692 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:27:48.206707 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:27:48.206722 | orchestrator | 2026-03-16 00:27:48.206740 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-03-16 00:27:48.206757 | orchestrator | Monday 16 March 2026 00:27:46 +0000 (0:00:01.433) 0:01:05.237 ********** 2026-03-16 00:27:48.206775 | orchestrator | ok: [testbed-manager] 2026-03-16 00:27:48.206792 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:27:48.206807 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:27:48.206823 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:27:48.206838 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:27:48.206853 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:27:48.206869 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:27:48.206884 | orchestrator | 2026-03-16 00:27:48.206901 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-03-16 00:27:48.206918 | orchestrator | Monday 16 March 2026 00:27:47 +0000 (0:00:00.977) 0:01:06.215 ********** 2026-03-16 00:27:48.206950 | orchestrator | ok: [testbed-manager] 2026-03-16 00:27:48.206966 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:27:48.206983 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:27:48.207000 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:27:48.207017 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:27:48.207034 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:27:48.207051 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:27:48.207068 | orchestrator | 2026-03-16 00:27:48.207085 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-03-16 00:27:48.207104 | 
orchestrator | Monday 16 March 2026 00:27:47 +0000 (0:00:00.182) 0:01:06.397 ********** 2026-03-16 00:27:48.207121 | orchestrator | ok: [testbed-manager] 2026-03-16 00:27:48.207138 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:27:48.207156 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:27:48.207172 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:27:48.207189 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:27:48.207207 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:27:48.207224 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:27:48.207242 | orchestrator | 2026-03-16 00:27:48.207259 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-03-16 00:27:48.207276 | orchestrator | Monday 16 March 2026 00:27:47 +0000 (0:00:00.166) 0:01:06.564 ********** 2026-03-16 00:27:48.207295 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:27:48.207314 | orchestrator | 2026-03-16 00:27:48.207355 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-03-16 00:30:04.156943 | orchestrator | Monday 16 March 2026 00:27:48 +0000 (0:00:00.226) 0:01:06.791 ********** 2026-03-16 00:30:04.157054 | orchestrator | ok: [testbed-manager] 2026-03-16 00:30:04.157073 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:30:04.157084 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:30:04.157096 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:30:04.157106 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:30:04.157117 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:30:04.157128 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:30:04.157139 | orchestrator | 2026-03-16 00:30:04.157151 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] 
*************************** 2026-03-16 00:30:04.157163 | orchestrator | Monday 16 March 2026 00:27:50 +0000 (0:00:01.853) 0:01:08.645 ********** 2026-03-16 00:30:04.157230 | orchestrator | changed: [testbed-manager] 2026-03-16 00:30:04.157243 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:30:04.157254 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:30:04.157265 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:30:04.157276 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:30:04.157286 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:30:04.157297 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:30:04.157308 | orchestrator | 2026-03-16 00:30:04.157319 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-03-16 00:30:04.157331 | orchestrator | Monday 16 March 2026 00:27:50 +0000 (0:00:00.560) 0:01:09.205 ********** 2026-03-16 00:30:04.157342 | orchestrator | ok: [testbed-manager] 2026-03-16 00:30:04.157353 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:30:04.157364 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:30:04.157374 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:30:04.157385 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:30:04.157396 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:30:04.157407 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:30:04.157417 | orchestrator | 2026-03-16 00:30:04.157429 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-03-16 00:30:04.157440 | orchestrator | Monday 16 March 2026 00:27:50 +0000 (0:00:00.188) 0:01:09.394 ********** 2026-03-16 00:30:04.157451 | orchestrator | ok: [testbed-manager] 2026-03-16 00:30:04.157462 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:30:04.157473 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:30:04.157487 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:30:04.157499 | orchestrator | ok: [testbed-node-1] 
2026-03-16 00:30:04.157512 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:30:04.157525 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:30:04.157539 | orchestrator | 2026-03-16 00:30:04.157552 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-03-16 00:30:04.157565 | orchestrator | Monday 16 March 2026 00:27:52 +0000 (0:00:01.388) 0:01:10.782 ********** 2026-03-16 00:30:04.157578 | orchestrator | changed: [testbed-manager] 2026-03-16 00:30:04.157591 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:30:04.157605 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:30:04.157619 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:30:04.157633 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:30:04.157645 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:30:04.157658 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:30:04.157671 | orchestrator | 2026-03-16 00:30:04.157688 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-03-16 00:30:04.157701 | orchestrator | Monday 16 March 2026 00:27:55 +0000 (0:00:03.053) 0:01:13.836 ********** 2026-03-16 00:30:04.157714 | orchestrator | ok: [testbed-manager] 2026-03-16 00:30:04.157727 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:30:04.157740 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:30:04.157754 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:30:04.157767 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:30:04.157780 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:30:04.157793 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:30:04.157807 | orchestrator | 2026-03-16 00:30:04.157820 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-03-16 00:30:04.157858 | orchestrator | Monday 16 March 2026 00:27:58 +0000 (0:00:02.938) 0:01:16.775 ********** 2026-03-16 00:30:04.157870 | orchestrator | ok: 
[testbed-manager] 2026-03-16 00:30:04.157881 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:30:04.157892 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:30:04.157902 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:30:04.157913 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:30:04.157923 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:30:04.157934 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:30:04.157945 | orchestrator | 2026-03-16 00:30:04.157955 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-03-16 00:30:04.157966 | orchestrator | Monday 16 March 2026 00:28:29 +0000 (0:00:31.082) 0:01:47.857 ********** 2026-03-16 00:30:04.157977 | orchestrator | changed: [testbed-manager] 2026-03-16 00:30:04.157988 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:30:04.157999 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:30:04.158010 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:30:04.158082 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:30:04.158094 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:30:04.158106 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:30:04.158116 | orchestrator | 2026-03-16 00:30:04.158127 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-03-16 00:30:04.158139 | orchestrator | Monday 16 March 2026 00:29:48 +0000 (0:01:18.848) 0:03:06.706 ********** 2026-03-16 00:30:04.158150 | orchestrator | ok: [testbed-manager] 2026-03-16 00:30:04.158161 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:30:04.158224 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:30:04.158236 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:30:04.158247 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:30:04.158257 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:30:04.158268 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:30:04.158279 | orchestrator | 2026-03-16 00:30:04.158290 | 
orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2026-03-16 00:30:04.158301 | orchestrator | Monday 16 March 2026 00:29:50 +0000 (0:00:02.048) 0:03:08.754 ********** 2026-03-16 00:30:04.158313 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:30:04.158324 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:30:04.158335 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:30:04.158345 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:30:04.158356 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:30:04.158367 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:30:04.158378 | orchestrator | changed: [testbed-manager] 2026-03-16 00:30:04.158389 | orchestrator | 2026-03-16 00:30:04.158400 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-03-16 00:30:04.158411 | orchestrator | Monday 16 March 2026 00:30:01 +0000 (0:00:11.769) 0:03:20.524 ********** 2026-03-16 00:30:04.158459 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-03-16 00:30:04.158495 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 
'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-03-16 00:30:04.158520 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-03-16 00:30:04.158533 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-16 00:30:04.158545 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-16 00:30:04.158556 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2026-03-16 00:30:04.158567 | orchestrator | 2026-03-16 00:30:04.158579 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-03-16 00:30:04.158590 | orchestrator | Monday 16 March 2026 00:30:02 +0000 (0:00:00.400) 0:03:20.925 ********** 2026-03-16 00:30:04.158601 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 
262144})  2026-03-16 00:30:04.158613 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:30:04.158632 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-16 00:30:04.158651 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:30:04.158677 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-16 00:30:04.158710 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-16 00:30:04.158728 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:30:04.158747 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:30:04.158765 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-16 00:30:04.158782 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-16 00:30:04.158798 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-16 00:30:04.158815 | orchestrator | 2026-03-16 00:30:04.158834 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-03-16 00:30:04.158851 | orchestrator | Monday 16 March 2026 00:30:04 +0000 (0:00:01.715) 0:03:22.640 ********** 2026-03-16 00:30:04.158870 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-16 00:30:04.158891 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-16 00:30:04.158910 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-16 00:30:04.158929 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-16 00:30:04.158948 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 
'value': 16777216})  2026-03-16 00:30:04.158979 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-16 00:30:10.217592 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-16 00:30:10.217706 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-16 00:30:10.217761 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-16 00:30:10.217785 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-16 00:30:10.217804 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-16 00:30:10.217822 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-16 00:30:10.217841 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-16 00:30:10.217859 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-16 00:30:10.217877 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-16 00:30:10.217896 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-16 00:30:10.217914 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-16 00:30:10.217932 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-16 00:30:10.217951 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-16 00:30:10.217968 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 
8192})  2026-03-16 00:30:10.217987 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-16 00:30:10.218007 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-16 00:30:10.218088 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-16 00:30:10.218108 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-16 00:30:10.218127 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-16 00:30:10.218146 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-16 00:30:10.218185 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:30:10.218206 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-16 00:30:10.218225 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-16 00:30:10.218243 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-16 00:30:10.218262 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-16 00:30:10.218279 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-16 00:30:10.218297 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-16 00:30:10.218315 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-16 00:30:10.218334 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:30:10.218352 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 
16777216})  2026-03-16 00:30:10.218384 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-16 00:30:10.218402 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-16 00:30:10.218421 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-16 00:30:10.218438 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-16 00:30:10.218457 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-16 00:30:10.218490 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-16 00:30:10.218510 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:30:10.218528 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:30:10.218546 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-16 00:30:10.218564 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-16 00:30:10.218583 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-16 00:30:10.218601 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-16 00:30:10.218619 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-16 00:30:10.218658 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-16 00:30:10.218677 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-16 00:30:10.218697 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 
3}) 2026-03-16 00:30:10.218716 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-16 00:30:10.218733 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-16 00:30:10.218751 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-16 00:30:10.218770 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-16 00:30:10.218788 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-16 00:30:10.218807 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-16 00:30:10.218825 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-16 00:30:10.218843 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-16 00:30:10.218861 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-16 00:30:10.218879 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-16 00:30:10.218897 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-16 00:30:10.218914 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-16 00:30:10.218932 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-16 00:30:10.218950 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-16 00:30:10.218967 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-16 00:30:10.218984 | orchestrator | changed: 
[testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-16 00:30:10.219002 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-16 00:30:10.219021 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-16 00:30:10.219039 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-16 00:30:10.219057 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-16 00:30:10.219075 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-16 00:30:10.219094 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-16 00:30:10.219124 | orchestrator | 2026-03-16 00:30:10.219145 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2026-03-16 00:30:10.219231 | orchestrator | Monday 16 March 2026 00:30:08 +0000 (0:00:04.102) 0:03:26.743 ********** 2026-03-16 00:30:10.219252 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-16 00:30:10.219271 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-16 00:30:10.219288 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-16 00:30:10.219306 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-16 00:30:10.219324 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-16 00:30:10.219352 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-16 00:30:10.219371 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-16 
00:30:10.219390 | orchestrator | 2026-03-16 00:30:10.219407 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2026-03-16 00:30:10.219427 | orchestrator | Monday 16 March 2026 00:30:08 +0000 (0:00:00.563) 0:03:27.306 ********** 2026-03-16 00:30:10.219444 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-16 00:30:10.219462 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:30:10.219481 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-16 00:30:10.219500 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-16 00:30:10.219518 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:30:10.219537 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:30:10.219555 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-16 00:30:10.219593 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:30:10.219612 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-16 00:30:10.219630 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-16 00:30:10.219661 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-16 00:30:22.759759 | orchestrator | 2026-03-16 00:30:22.759863 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] ***************** 2026-03-16 00:30:22.759880 | orchestrator | Monday 16 March 2026 00:30:10 +0000 (0:00:01.492) 0:03:28.799 ********** 2026-03-16 00:30:22.759892 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-16 
00:30:22.759906 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:30:22.759918 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-16 00:30:22.759930 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-16 00:30:22.759941 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:30:22.759952 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-16 00:30:22.759962 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:30:22.759973 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:30:22.759984 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-16 00:30:22.759995 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-16 00:30:22.760006 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-16 00:30:22.760017 | orchestrator | 2026-03-16 00:30:22.760028 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2026-03-16 00:30:22.760059 | orchestrator | Monday 16 March 2026 00:30:10 +0000 (0:00:00.566) 0:03:29.366 ********** 2026-03-16 00:30:22.760071 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-16 00:30:22.760082 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:30:22.760093 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-16 00:30:22.760104 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-16 00:30:22.760115 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:30:22.760125 
| orchestrator | skipping: [testbed-node-1] 2026-03-16 00:30:22.760136 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-16 00:30:22.760191 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:30:22.760202 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-16 00:30:22.760213 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-16 00:30:22.760224 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-16 00:30:22.760235 | orchestrator | 2026-03-16 00:30:22.760246 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-03-16 00:30:22.760257 | orchestrator | Monday 16 March 2026 00:30:11 +0000 (0:00:00.511) 0:03:29.877 ********** 2026-03-16 00:30:22.760268 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:30:22.760279 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:30:22.760289 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:30:22.760300 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:30:22.760311 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:30:22.760321 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:30:22.760332 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:30:22.760343 | orchestrator | 2026-03-16 00:30:22.760354 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2026-03-16 00:30:22.760365 | orchestrator | Monday 16 March 2026 00:30:11 +0000 (0:00:00.313) 0:03:30.191 ********** 2026-03-16 00:30:22.760376 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:30:22.760388 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:30:22.760398 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:30:22.760409 | orchestrator | ok: [testbed-node-3] 
2026-03-16 00:30:22.760420 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:30:22.760430 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:30:22.760441 | orchestrator | ok: [testbed-manager] 2026-03-16 00:30:22.760451 | orchestrator | 2026-03-16 00:30:22.760462 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2026-03-16 00:30:22.760473 | orchestrator | Monday 16 March 2026 00:30:16 +0000 (0:00:05.036) 0:03:35.227 ********** 2026-03-16 00:30:22.760484 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2026-03-16 00:30:22.760495 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:30:22.760506 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2026-03-16 00:30:22.760517 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2026-03-16 00:30:22.760528 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:30:22.760538 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:30:22.760549 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2026-03-16 00:30:22.760560 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2026-03-16 00:30:22.760571 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:30:22.760581 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:30:22.760603 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2026-03-16 00:30:22.760615 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:30:22.760626 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2026-03-16 00:30:22.760637 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:30:22.760647 | orchestrator | 2026-03-16 00:30:22.760666 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-03-16 00:30:22.760678 | orchestrator | Monday 16 March 2026 00:30:16 +0000 (0:00:00.288) 0:03:35.516 ********** 2026-03-16 00:30:22.760688 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-03-16 00:30:22.760699 | orchestrator 
| ok: [testbed-node-3] => (item=cron) 2026-03-16 00:30:22.760710 | orchestrator | ok: [testbed-node-4] => (item=cron) 2026-03-16 00:30:22.760737 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-03-16 00:30:22.760749 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-03-16 00:30:22.760760 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-03-16 00:30:22.760770 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-03-16 00:30:22.760781 | orchestrator | 2026-03-16 00:30:22.760791 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-03-16 00:30:22.760802 | orchestrator | Monday 16 March 2026 00:30:18 +0000 (0:00:01.110) 0:03:36.626 ********** 2026-03-16 00:30:22.760815 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:30:22.760828 | orchestrator | 2026-03-16 00:30:22.760839 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-03-16 00:30:22.760850 | orchestrator | Monday 16 March 2026 00:30:18 +0000 (0:00:00.496) 0:03:37.123 ********** 2026-03-16 00:30:22.760860 | orchestrator | ok: [testbed-manager] 2026-03-16 00:30:22.760871 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:30:22.760881 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:30:22.760892 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:30:22.760903 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:30:22.760913 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:30:22.760924 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:30:22.760934 | orchestrator | 2026-03-16 00:30:22.760945 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-03-16 00:30:22.760956 | orchestrator | Monday 16 March 2026 00:30:19 +0000 
(0:00:01.409) 0:03:38.532 ********** 2026-03-16 00:30:22.760966 | orchestrator | ok: [testbed-manager] 2026-03-16 00:30:22.760977 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:30:22.760987 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:30:22.760997 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:30:22.761008 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:30:22.761018 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:30:22.761029 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:30:22.761039 | orchestrator | 2026-03-16 00:30:22.761050 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-03-16 00:30:22.761061 | orchestrator | Monday 16 March 2026 00:30:20 +0000 (0:00:00.592) 0:03:39.125 ********** 2026-03-16 00:30:22.761071 | orchestrator | changed: [testbed-manager] 2026-03-16 00:30:22.761082 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:30:22.761092 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:30:22.761103 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:30:22.761114 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:30:22.761125 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:30:22.761135 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:30:22.761173 | orchestrator | 2026-03-16 00:30:22.761190 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-03-16 00:30:22.761206 | orchestrator | Monday 16 March 2026 00:30:21 +0000 (0:00:00.594) 0:03:39.719 ********** 2026-03-16 00:30:22.761223 | orchestrator | ok: [testbed-manager] 2026-03-16 00:30:22.761239 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:30:22.761271 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:30:22.761289 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:30:22.761307 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:30:22.761325 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:30:22.761342 | orchestrator | ok: 
[testbed-node-2] 2026-03-16 00:30:22.761352 | orchestrator | 2026-03-16 00:30:22.761363 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-03-16 00:30:22.761382 | orchestrator | Monday 16 March 2026 00:30:21 +0000 (0:00:00.600) 0:03:40.319 ********** 2026-03-16 00:30:22.761402 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773619479.4752984, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-16 00:30:22.761417 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773619541.3624303, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-16 00:30:22.761436 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773619516.7576244, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-16 00:30:22.761485 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773619523.5571015, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-16 00:30:27.754542 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773619527.221988, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-16 00:30:27.754651 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773619516.135499, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-16 00:30:27.754667 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773619530.4018662, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-16 00:30:27.754703 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-16 00:30:27.754730 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-16 00:30:27.754743 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 
1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-16 00:30:27.754754 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-16 00:30:27.754792 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-16 00:30:27.754804 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}) 2026-03-16 00:30:27.754816 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-16 00:30:27.754836 | orchestrator | 2026-03-16 00:30:27.754849 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-03-16 00:30:27.754862 | orchestrator | Monday 16 March 2026 00:30:22 +0000 (0:00:01.022) 0:03:41.342 ********** 2026-03-16 00:30:27.754881 | orchestrator | changed: [testbed-manager] 2026-03-16 00:30:27.754901 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:30:27.754929 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:30:27.754949 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:30:27.754968 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:30:27.754986 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:30:27.755003 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:30:27.755020 | orchestrator | 2026-03-16 00:30:27.755038 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2026-03-16 00:30:27.755057 | orchestrator | Monday 16 March 2026 00:30:23 +0000 (0:00:01.129) 0:03:42.471 ********** 2026-03-16 00:30:27.755075 | orchestrator | changed: [testbed-manager] 2026-03-16 00:30:27.755094 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:30:27.755113 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:30:27.755159 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:30:27.755180 | 
orchestrator | changed: [testbed-node-2] 2026-03-16 00:30:27.755197 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:30:27.755216 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:30:27.755237 | orchestrator | 2026-03-16 00:30:27.755267 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-03-16 00:30:27.755285 | orchestrator | Monday 16 March 2026 00:30:25 +0000 (0:00:01.186) 0:03:43.658 ********** 2026-03-16 00:30:27.755299 | orchestrator | changed: [testbed-manager] 2026-03-16 00:30:27.755312 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:30:27.755324 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:30:27.755337 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:30:27.755349 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:30:27.755362 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:30:27.755373 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:30:27.755392 | orchestrator | 2026-03-16 00:30:27.755411 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-03-16 00:30:27.755429 | orchestrator | Monday 16 March 2026 00:30:26 +0000 (0:00:01.145) 0:03:44.804 ********** 2026-03-16 00:30:27.755449 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:30:27.755467 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:30:27.755486 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:30:27.755497 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:30:27.755508 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:30:27.755519 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:30:27.755529 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:30:27.755540 | orchestrator | 2026-03-16 00:30:27.755550 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-03-16 00:30:27.755561 | orchestrator | Monday 16 March 2026 00:30:26 +0000 
(0:00:00.288) 0:03:45.092 ********** 2026-03-16 00:30:27.755572 | orchestrator | ok: [testbed-manager] 2026-03-16 00:30:27.755584 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:30:27.755594 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:30:27.755605 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:30:27.755615 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:30:27.755626 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:30:27.755637 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:30:27.755647 | orchestrator | 2026-03-16 00:30:27.755658 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-03-16 00:30:27.755669 | orchestrator | Monday 16 March 2026 00:30:27 +0000 (0:00:00.813) 0:03:45.906 ********** 2026-03-16 00:30:27.755681 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:30:27.755704 | orchestrator | 2026-03-16 00:30:27.755716 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-03-16 00:30:27.755738 | orchestrator | Monday 16 March 2026 00:30:27 +0000 (0:00:00.429) 0:03:46.336 ********** 2026-03-16 00:31:47.994541 | orchestrator | ok: [testbed-manager] 2026-03-16 00:31:47.994641 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:31:47.994654 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:31:47.994661 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:31:47.994669 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:31:47.994676 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:31:47.994684 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:31:47.994691 | orchestrator | 2026-03-16 00:31:47.994699 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 
2026-03-16 00:31:47.994708 | orchestrator | Monday 16 March 2026 00:30:36 +0000 (0:00:08.956) 0:03:55.292 ********** 2026-03-16 00:31:47.994715 | orchestrator | ok: [testbed-manager] 2026-03-16 00:31:47.994722 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:31:47.994730 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:31:47.994737 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:31:47.994744 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:31:47.994750 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:31:47.994759 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:31:47.994768 | orchestrator | 2026-03-16 00:31:47.994775 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-03-16 00:31:47.994782 | orchestrator | Monday 16 March 2026 00:30:37 +0000 (0:00:01.241) 0:03:56.533 ********** 2026-03-16 00:31:47.994789 | orchestrator | ok: [testbed-manager] 2026-03-16 00:31:47.994796 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:31:47.994803 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:31:47.994810 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:31:47.994817 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:31:47.994824 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:31:47.994831 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:31:47.994838 | orchestrator | 2026-03-16 00:31:47.994844 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-03-16 00:31:47.994851 | orchestrator | Monday 16 March 2026 00:30:39 +0000 (0:00:01.174) 0:03:57.708 ********** 2026-03-16 00:31:47.994858 | orchestrator | ok: [testbed-manager] 2026-03-16 00:31:47.994866 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:31:47.994873 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:31:47.994879 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:31:47.994887 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:31:47.994894 | orchestrator | ok: [testbed-node-1] 
2026-03-16 00:31:47.994901 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:31:47.994908 | orchestrator | 2026-03-16 00:31:47.994915 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-03-16 00:31:47.994923 | orchestrator | Monday 16 March 2026 00:30:39 +0000 (0:00:00.269) 0:03:57.978 ********** 2026-03-16 00:31:47.994931 | orchestrator | ok: [testbed-manager] 2026-03-16 00:31:47.994937 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:31:47.994944 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:31:47.994998 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:31:47.995004 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:31:47.995010 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:31:47.995017 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:31:47.995024 | orchestrator | 2026-03-16 00:31:47.995031 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-03-16 00:31:47.995038 | orchestrator | Monday 16 March 2026 00:30:39 +0000 (0:00:00.292) 0:03:58.270 ********** 2026-03-16 00:31:47.995045 | orchestrator | ok: [testbed-manager] 2026-03-16 00:31:47.995052 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:31:47.995059 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:31:47.995066 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:31:47.995095 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:31:47.995103 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:31:47.995110 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:31:47.995118 | orchestrator | 2026-03-16 00:31:47.995126 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-03-16 00:31:47.995134 | orchestrator | Monday 16 March 2026 00:30:39 +0000 (0:00:00.290) 0:03:58.561 ********** 2026-03-16 00:31:47.995142 | orchestrator | ok: [testbed-manager] 2026-03-16 00:31:47.995150 | orchestrator | ok: [testbed-node-4] 
2026-03-16 00:31:47.995158 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:31:47.995166 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:31:47.995173 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:31:47.995181 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:31:47.995188 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:31:47.995196 | orchestrator | 2026-03-16 00:31:47.995203 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2026-03-16 00:31:47.995211 | orchestrator | Monday 16 March 2026 00:30:45 +0000 (0:00:05.448) 0:04:04.010 ********** 2026-03-16 00:31:47.995224 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:31:47.995233 | orchestrator | 2026-03-16 00:31:47.995240 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2026-03-16 00:31:47.995247 | orchestrator | Monday 16 March 2026 00:30:45 +0000 (0:00:00.341) 0:04:04.351 ********** 2026-03-16 00:31:47.995255 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2026-03-16 00:31:47.995261 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2026-03-16 00:31:47.995268 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:31:47.995276 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2026-03-16 00:31:47.995283 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2026-03-16 00:31:47.995305 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2026-03-16 00:31:47.995313 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:31:47.995320 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2026-03-16 00:31:47.995327 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  
2026-03-16 00:31:47.995333 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:31:47.995340 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-03-16 00:31:47.995347 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-03-16 00:31:47.995354 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:31:47.995361 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-03-16 00:31:47.995368 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-03-16 00:31:47.995375 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-03-16 00:31:47.995397 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:31:47.995405 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:31:47.995412 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-03-16 00:31:47.995419 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-03-16 00:31:47.995427 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:31:47.995434 | orchestrator |
2026-03-16 00:31:47.995441 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-03-16 00:31:47.995460 | orchestrator | Monday 16 March 2026 00:30:46 +0000 (0:00:00.297) 0:04:04.649 **********
2026-03-16 00:31:47.995468 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-16 00:31:47.995475 | orchestrator |
2026-03-16 00:31:47.995483 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-03-16 00:31:47.995497 | orchestrator | Monday 16 March 2026 00:30:46 +0000 (0:00:00.355) 0:04:05.005 **********
2026-03-16 00:31:47.995512 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-03-16 00:31:47.995520 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-03-16 00:31:47.995526 | orchestrator | skipping: [testbed-manager]
2026-03-16 00:31:47.995534 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-03-16 00:31:47.995541 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:31:47.995548 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-03-16 00:31:47.995555 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:31:47.995562 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-03-16 00:31:47.995569 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:31:47.995576 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:31:47.995583 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-03-16 00:31:47.995590 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:31:47.995597 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-03-16 00:31:47.995604 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:31:47.995611 | orchestrator |
2026-03-16 00:31:47.995618 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-03-16 00:31:47.995625 | orchestrator | Monday 16 March 2026 00:30:46 +0000 (0:00:00.282) 0:04:05.287 **********
2026-03-16 00:31:47.995633 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-16 00:31:47.995640 | orchestrator |
2026-03-16 00:31:47.995647 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-03-16 00:31:47.995659 | orchestrator | Monday 16 March 2026 00:30:47 +0000 (0:00:00.396) 0:04:05.683 **********
2026-03-16 00:31:47.995666 | orchestrator | changed: [testbed-manager]
2026-03-16 00:31:47.995674 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:31:47.995681 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:31:47.995688 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:31:47.995698 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:31:47.995706 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:31:47.995713 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:31:47.995720 | orchestrator |
2026-03-16 00:31:47.995727 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-03-16 00:31:47.995734 | orchestrator | Monday 16 March 2026 00:31:22 +0000 (0:00:34.997) 0:04:40.681 **********
2026-03-16 00:31:47.995741 | orchestrator | changed: [testbed-manager]
2026-03-16 00:31:47.995748 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:31:47.995755 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:31:47.995763 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:31:47.995769 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:31:47.995776 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:31:47.995783 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:31:47.995790 | orchestrator |
2026-03-16 00:31:47.995798 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-03-16 00:31:47.995805 | orchestrator | Monday 16 March 2026 00:31:31 +0000 (0:00:09.307) 0:04:49.988 **********
2026-03-16 00:31:47.995812 | orchestrator | changed: [testbed-manager]
2026-03-16 00:31:47.995819 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:31:47.995826 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:31:47.995833 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:31:47.995840 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:31:47.995847 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:31:47.995854 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:31:47.995861 | orchestrator |
2026-03-16 00:31:47.995868 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-03-16 00:31:47.995880 | orchestrator | Monday 16 March 2026 00:31:39 +0000 (0:00:08.159) 0:04:58.148 **********
2026-03-16 00:31:47.995887 | orchestrator | ok: [testbed-manager]
2026-03-16 00:31:47.995895 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:31:47.995901 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:31:47.995908 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:31:47.995915 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:31:47.995922 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:31:47.995929 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:31:47.995936 | orchestrator |
2026-03-16 00:31:47.995942 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-03-16 00:31:47.996023 | orchestrator | Monday 16 March 2026 00:31:41 +0000 (0:00:02.038) 0:05:00.186 **********
2026-03-16 00:31:47.996031 | orchestrator | changed: [testbed-manager]
2026-03-16 00:31:47.996038 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:31:47.996045 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:31:47.996052 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:31:47.996059 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:31:47.996066 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:31:47.996073 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:31:47.996080 | orchestrator |
2026-03-16 00:31:47.996095 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-03-16 00:31:59.824150 | orchestrator | Monday 16 March 2026 00:31:47 +0000 (0:00:06.385) 0:05:06.572 **********
2026-03-16 00:31:59.824250 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-16 00:31:59.824265 | orchestrator |
2026-03-16 00:31:59.824275 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-03-16 00:31:59.824284 | orchestrator | Monday 16 March 2026 00:31:48 +0000 (0:00:00.593) 0:05:07.165 **********
2026-03-16 00:31:59.824293 | orchestrator | changed: [testbed-manager]
2026-03-16 00:31:59.824303 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:31:59.824311 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:31:59.824320 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:31:59.824329 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:31:59.824339 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:31:59.824353 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:31:59.824367 | orchestrator |
2026-03-16 00:31:59.824380 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-03-16 00:31:59.824394 | orchestrator | Monday 16 March 2026 00:31:49 +0000 (0:00:00.765) 0:05:07.931 **********
2026-03-16 00:31:59.824408 | orchestrator | ok: [testbed-manager]
2026-03-16 00:31:59.824423 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:31:59.824437 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:31:59.824451 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:31:59.824465 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:31:59.824478 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:31:59.824492 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:31:59.824505 | orchestrator |
2026-03-16 00:31:59.824519 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-03-16 00:31:59.824534 | orchestrator | Monday 16 March 2026 00:31:51 +0000 (0:00:01.780) 0:05:09.711 **********
2026-03-16 00:31:59.824548 | orchestrator | changed: [testbed-manager]
2026-03-16 00:31:59.824562 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:31:59.824577 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:31:59.824592 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:31:59.824606 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:31:59.824621 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:31:59.824637 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:31:59.824652 | orchestrator |
2026-03-16 00:31:59.824667 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-03-16 00:31:59.824682 | orchestrator | Monday 16 March 2026 00:31:51 +0000 (0:00:00.876) 0:05:10.588 **********
2026-03-16 00:31:59.824726 | orchestrator | skipping: [testbed-manager]
2026-03-16 00:31:59.824742 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:31:59.824757 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:31:59.824771 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:31:59.824786 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:31:59.824798 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:31:59.824807 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:31:59.824816 | orchestrator |
2026-03-16 00:31:59.824824 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-03-16 00:31:59.824833 | orchestrator | Monday 16 March 2026 00:31:52 +0000 (0:00:00.307) 0:05:10.896 **********
2026-03-16 00:31:59.824842 | orchestrator | skipping: [testbed-manager]
2026-03-16 00:31:59.824850 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:31:59.824859 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:31:59.824881 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:31:59.824890 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:31:59.824899 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:31:59.824907 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:31:59.824916 | orchestrator |
2026-03-16 00:31:59.824924 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-03-16 00:31:59.824961 | orchestrator | Monday 16 March 2026 00:31:52 +0000 (0:00:00.436) 0:05:11.333 **********
2026-03-16 00:31:59.824970 | orchestrator | ok: [testbed-manager]
2026-03-16 00:31:59.824979 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:31:59.824988 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:31:59.824996 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:31:59.825005 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:31:59.825018 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:31:59.825032 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:31:59.825049 | orchestrator |
2026-03-16 00:31:59.825070 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-03-16 00:31:59.825083 | orchestrator | Monday 16 March 2026 00:31:53 +0000 (0:00:00.306) 0:05:11.640 **********
2026-03-16 00:31:59.825097 | orchestrator | skipping: [testbed-manager]
2026-03-16 00:31:59.825111 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:31:59.825126 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:31:59.825142 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:31:59.825156 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:31:59.825169 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:31:59.825189 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:31:59.825209 | orchestrator |
2026-03-16 00:31:59.825223 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-03-16 00:31:59.825238 | orchestrator | Monday 16 March 2026 00:31:53 +0000 (0:00:00.332) 0:05:11.972 **********
2026-03-16 00:31:59.825252 | orchestrator | ok: [testbed-manager]
2026-03-16 00:31:59.825265 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:31:59.825279 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:31:59.825291 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:31:59.825303 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:31:59.825315 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:31:59.825327 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:31:59.825341 | orchestrator |
2026-03-16 00:31:59.825354 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-03-16 00:31:59.825367 | orchestrator | Monday 16 March 2026 00:31:53 +0000 (0:00:00.356) 0:05:12.328 **********
2026-03-16 00:31:59.825381 | orchestrator | ok: [testbed-manager] =>
2026-03-16 00:31:59.825394 | orchestrator |  docker_version: 5:27.5.1
2026-03-16 00:31:59.825407 | orchestrator | ok: [testbed-node-3] =>
2026-03-16 00:31:59.825420 | orchestrator |  docker_version: 5:27.5.1
2026-03-16 00:31:59.825432 | orchestrator | ok: [testbed-node-4] =>
2026-03-16 00:31:59.825448 | orchestrator |  docker_version: 5:27.5.1
2026-03-16 00:31:59.825461 | orchestrator | ok: [testbed-node-5] =>
2026-03-16 00:31:59.825475 | orchestrator |  docker_version: 5:27.5.1
2026-03-16 00:31:59.825513 | orchestrator | ok: [testbed-node-0] =>
2026-03-16 00:31:59.825542 | orchestrator |  docker_version: 5:27.5.1
2026-03-16 00:31:59.825551 | orchestrator | ok: [testbed-node-1] =>
2026-03-16 00:31:59.825560 | orchestrator |  docker_version: 5:27.5.1
2026-03-16 00:31:59.825568 | orchestrator | ok: [testbed-node-2] =>
2026-03-16 00:31:59.825577 | orchestrator |  docker_version: 5:27.5.1
2026-03-16 00:31:59.825585 | orchestrator |
2026-03-16 00:31:59.825594 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-03-16 00:31:59.825603 | orchestrator | Monday 16 March 2026 00:31:54 +0000 (0:00:00.313) 0:05:12.642 **********
2026-03-16 00:31:59.825611 | orchestrator | ok: [testbed-manager] =>
2026-03-16 00:31:59.825620 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-16 00:31:59.825629 | orchestrator | ok: [testbed-node-3] =>
2026-03-16 00:31:59.825637 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-16 00:31:59.825646 | orchestrator | ok: [testbed-node-4] =>
2026-03-16 00:31:59.825654 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-16 00:31:59.825662 | orchestrator | ok: [testbed-node-5] =>
2026-03-16 00:31:59.825671 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-16 00:31:59.825680 | orchestrator | ok: [testbed-node-0] =>
2026-03-16 00:31:59.825688 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-16 00:31:59.825696 | orchestrator | ok: [testbed-node-1] =>
2026-03-16 00:31:59.825705 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-16 00:31:59.825714 | orchestrator | ok: [testbed-node-2] =>
2026-03-16 00:31:59.825722 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-16 00:31:59.825731 | orchestrator |
2026-03-16 00:31:59.825739 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-03-16 00:31:59.825748 | orchestrator | Monday 16 March 2026 00:31:54 +0000 (0:00:00.321) 0:05:12.964 **********
2026-03-16 00:31:59.825756 | orchestrator | skipping: [testbed-manager]
2026-03-16 00:31:59.825765 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:31:59.825774 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:31:59.825782 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:31:59.825791 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:31:59.825799 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:31:59.825808 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:31:59.825816 | orchestrator |
2026-03-16 00:31:59.825825 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-03-16 00:31:59.825833 | orchestrator | Monday 16 March 2026 00:31:54 +0000 (0:00:00.278) 0:05:13.242 **********
2026-03-16 00:31:59.825842 | orchestrator | skipping: [testbed-manager]
2026-03-16 00:31:59.825850 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:31:59.825859 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:31:59.825867 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:31:59.825876 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:31:59.825884 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:31:59.825893 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:31:59.825901 | orchestrator |
2026-03-16 00:31:59.825910 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-03-16 00:31:59.825919 | orchestrator | Monday 16 March 2026 00:31:54 +0000 (0:00:00.291) 0:05:13.533 **********
2026-03-16 00:31:59.825946 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-16 00:31:59.825958 | orchestrator |
2026-03-16 00:31:59.825974 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-03-16 00:31:59.825989 | orchestrator | Monday 16 March 2026 00:31:55 +0000 (0:00:00.435) 0:05:13.969 **********
2026-03-16 00:31:59.826077 | orchestrator | ok: [testbed-manager]
2026-03-16 00:31:59.826099 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:31:59.826140 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:31:59.826149 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:31:59.826158 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:31:59.826175 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:31:59.826184 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:31:59.826192 | orchestrator |
2026-03-16 00:31:59.826201 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-03-16 00:31:59.826210 | orchestrator | Monday 16 March 2026 00:31:56 +0000 (0:00:01.017) 0:05:14.987 **********
2026-03-16 00:31:59.826219 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:31:59.826227 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:31:59.826236 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:31:59.826244 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:31:59.826253 | orchestrator | ok: [testbed-manager]
2026-03-16 00:31:59.826261 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:31:59.826270 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:31:59.826278 | orchestrator |
2026-03-16 00:31:59.826287 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-03-16 00:31:59.826297 | orchestrator | Monday 16 March 2026 00:31:59 +0000 (0:00:03.039) 0:05:18.027 **********
2026-03-16 00:31:59.826305 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-03-16 00:31:59.826315 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-03-16 00:31:59.826323 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-03-16 00:31:59.826332 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-03-16 00:31:59.826341 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-03-16 00:31:59.826350 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-03-16 00:31:59.826358 | orchestrator | skipping: [testbed-manager]
2026-03-16 00:31:59.826367 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-03-16 00:31:59.826376 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-03-16 00:31:59.826385 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-03-16 00:31:59.826393 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:31:59.826402 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-03-16 00:31:59.826410 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-03-16 00:31:59.826419 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-03-16 00:31:59.826428 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:31:59.826436 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-03-16 00:31:59.826456 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-03-16 00:33:02.776920 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-03-16 00:33:02.777064 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:33:02.777091 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-03-16 00:33:02.777111 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-03-16 00:33:02.777129 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-03-16 00:33:02.777147 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:33:02.777166 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:33:02.777185 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-03-16 00:33:02.777204 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-03-16 00:33:02.777222 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-03-16 00:33:02.777241 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:33:02.777254 | orchestrator |
2026-03-16 00:33:02.777266 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-03-16 00:33:02.777279 | orchestrator | Monday 16 March 2026 00:32:00 +0000 (0:00:00.576) 0:05:18.603 **********
2026-03-16 00:33:02.777290 | orchestrator | ok: [testbed-manager]
2026-03-16 00:33:02.777301 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:33:02.777312 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:33:02.777323 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:33:02.777335 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:33:02.777346 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:33:02.777357 | orchestrator | changed: [testbed-node-2]
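[editor's note] The `Add repository gpg key` / `Add repository` tasks logged next follow the standard pattern for installing Docker from its upstream apt repository on a Debian-family host. A hedged sketch under assumptions: the key URL is the well-known upstream one, and the keyring destination path is illustrative; the osism.services.docker role may implement this differently.

```yaml
# Sketch only: trust Docker's upstream apt repo and register it.
- name: Add repository gpg key
  ansible.builtin.get_url:
    url: https://download.docker.com/linux/ubuntu/gpg
    dest: /etc/apt/trusted.gpg.d/docker.asc
    mode: "0644"

- name: Add repository
  ansible.builtin.apt_repository:
    repo: "deb https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
    state: present
```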
2026-03-16 00:33:02.777391 | orchestrator |
2026-03-16 00:33:02.777406 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-03-16 00:33:02.777418 | orchestrator | Monday 16 March 2026 00:32:06 +0000 (0:00:06.899) 0:05:25.502 **********
2026-03-16 00:33:02.777431 | orchestrator | ok: [testbed-manager]
2026-03-16 00:33:02.777443 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:33:02.777456 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:33:02.777468 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:33:02.777480 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:33:02.777492 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:33:02.777504 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:33:02.777517 | orchestrator |
2026-03-16 00:33:02.777530 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-03-16 00:33:02.777542 | orchestrator | Monday 16 March 2026 00:32:07 +0000 (0:00:01.059) 0:05:26.562 **********
2026-03-16 00:33:02.777554 | orchestrator | ok: [testbed-manager]
2026-03-16 00:33:02.777572 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:33:02.777591 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:33:02.777610 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:33:02.777627 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:33:02.777646 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:33:02.777666 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:33:02.777685 | orchestrator |
2026-03-16 00:33:02.777704 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-03-16 00:33:02.777722 | orchestrator | Monday 16 March 2026 00:32:16 +0000 (0:00:08.813) 0:05:35.375 **********
2026-03-16 00:33:02.777740 | orchestrator | changed: [testbed-manager]
2026-03-16 00:33:02.777757 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:33:02.777775 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:33:02.777798 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:33:02.777818 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:33:02.777863 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:33:02.777881 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:33:02.777898 | orchestrator |
2026-03-16 00:33:02.777916 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-03-16 00:33:02.777934 | orchestrator | Monday 16 March 2026 00:32:20 +0000 (0:00:03.272) 0:05:38.647 **********
2026-03-16 00:33:02.777954 | orchestrator | ok: [testbed-manager]
2026-03-16 00:33:02.777976 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:33:02.777993 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:33:02.778014 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:33:02.778112 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:33:02.778133 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:33:02.778150 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:33:02.778168 | orchestrator |
2026-03-16 00:33:02.778190 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-03-16 00:33:02.778220 | orchestrator | Monday 16 March 2026 00:32:21 +0000 (0:00:01.335) 0:05:39.983 **********
2026-03-16 00:33:02.778246 | orchestrator | ok: [testbed-manager]
2026-03-16 00:33:02.778274 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:33:02.778300 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:33:02.778328 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:33:02.778355 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:33:02.778384 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:33:02.778412 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:33:02.778435 | orchestrator |
2026-03-16 00:33:02.778452 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-03-16 00:33:02.778469 | orchestrator | Monday 16 March 2026 00:32:22 +0000 (0:00:00.606) 0:05:41.532 **********
2026-03-16 00:33:02.778487 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:33:02.778506 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:33:02.778525 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:33:02.778545 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:33:02.778585 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:33:02.778606 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:33:02.778627 | orchestrator | changed: [testbed-manager]
2026-03-16 00:33:02.778647 | orchestrator |
2026-03-16 00:33:02.778665 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-03-16 00:33:02.778683 | orchestrator | Monday 16 March 2026 00:32:23 +0000 (0:00:00.606) 0:05:42.138 **********
2026-03-16 00:33:02.778703 | orchestrator | ok: [testbed-manager]
2026-03-16 00:33:02.778723 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:33:02.778742 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:33:02.778762 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:33:02.778782 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:33:02.778802 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:33:02.778822 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:33:02.778914 | orchestrator |
2026-03-16 00:33:02.778935 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-03-16 00:33:02.778985 | orchestrator | Monday 16 March 2026 00:32:33 +0000 (0:00:10.382) 0:05:52.521 **********
2026-03-16 00:33:02.779005 | orchestrator | changed: [testbed-manager]
2026-03-16 00:33:02.779026 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:33:02.779045 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:33:02.779065 | orchestrator | changed: [testbed-node-5]
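[editor's note] The `Pin docker package version` and `Lock containerd package` tasks above combine apt pinning with a dpkg hold so upgrades cannot move these packages. A hedged sketch; the preferences file path, pin priority, and package names are illustrative assumptions, though the version string matches the `docker_version: 5:27.5.1` printed earlier in this log:

```yaml
# Illustrative only: pin docker-ce to a fixed version and hold containerd.io.
- name: Pin docker package version
  ansible.builtin.copy:
    dest: /etc/apt/preferences.d/docker-ce
    mode: "0644"
    content: |
      Package: docker-ce
      Pin: version 5:27.5.1*
      Pin-Priority: 1000

- name: Lock containerd package
  ansible.builtin.dpkg_selections:
    name: containerd.io
    selection: hold
```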
2026-03-16 00:33:02.779085 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:33:02.779104 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:33:02.779124 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:33:02.779144 | orchestrator |
2026-03-16 00:33:02.779163 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-03-16 00:33:02.779183 | orchestrator | Monday 16 March 2026 00:32:34 +0000 (0:00:00.954) 0:05:53.475 **********
2026-03-16 00:33:02.779203 | orchestrator | ok: [testbed-manager]
2026-03-16 00:33:02.779224 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:33:02.779244 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:33:02.779264 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:33:02.779284 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:33:02.779304 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:33:02.779324 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:33:02.779344 | orchestrator |
2026-03-16 00:33:02.779363 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-03-16 00:33:02.779380 | orchestrator | Monday 16 March 2026 00:32:43 +0000 (0:00:08.941) 0:06:02.416 **********
2026-03-16 00:33:02.779398 | orchestrator | ok: [testbed-manager]
2026-03-16 00:33:02.779416 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:33:02.779433 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:33:02.779450 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:33:02.779468 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:33:02.779488 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:33:02.779508 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:33:02.779528 | orchestrator |
2026-03-16 00:33:02.779548 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-03-16 00:33:02.779567 | orchestrator | Monday 16 March 2026 00:32:56 +0000 (0:00:12.186) 0:06:14.603 **********
2026-03-16 00:33:02.779588 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-03-16 00:33:02.779608 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-03-16 00:33:02.779627 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-03-16 00:33:02.779644 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-03-16 00:33:02.779663 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-03-16 00:33:02.779680 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-03-16 00:33:02.779697 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-03-16 00:33:02.779714 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-03-16 00:33:02.779731 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-03-16 00:33:02.779747 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-03-16 00:33:02.779781 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-03-16 00:33:02.779932 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-03-16 00:33:02.779960 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-03-16 00:33:02.779980 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-03-16 00:33:02.780001 | orchestrator |
2026-03-16 00:33:02.780022 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-03-16 00:33:02.780043 | orchestrator | Monday 16 March 2026 00:32:57 +0000 (0:00:01.232) 0:06:15.835 **********
2026-03-16 00:33:02.780070 | orchestrator | skipping: [testbed-manager]
2026-03-16 00:33:02.780091 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:33:02.780112 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:33:02.780132 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:33:02.780152 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:33:02.780172 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:33:02.780193 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:33:02.780213 | orchestrator |
2026-03-16 00:33:02.780233 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-03-16 00:33:02.780254 | orchestrator | Monday 16 March 2026 00:32:57 +0000 (0:00:00.505) 0:06:16.340 **********
2026-03-16 00:33:02.780274 | orchestrator | ok: [testbed-manager]
2026-03-16 00:33:02.780294 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:33:02.780315 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:33:02.780334 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:33:02.780354 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:33:02.780374 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:33:02.780395 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:33:02.780415 | orchestrator |
2026-03-16 00:33:02.780434 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-03-16 00:33:02.780454 | orchestrator | Monday 16 March 2026 00:33:01 +0000 (0:00:03.902) 0:06:20.243 **********
2026-03-16 00:33:02.780471 | orchestrator | skipping: [testbed-manager]
2026-03-16 00:33:02.780489 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:33:02.780507 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:33:02.780525 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:33:02.780543 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:33:02.780561 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:33:02.780578 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:33:02.780596 | orchestrator |
2026-03-16 00:33:02.780615 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-03-16 00:33:02.780632 | orchestrator | Monday 16 March 2026 00:33:02 +0000 (0:00:00.513) 0:06:20.756 **********
2026-03-16 00:33:02.780651 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-03-16 00:33:02.780669 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-03-16 00:33:02.780688 | orchestrator | skipping: [testbed-manager]
2026-03-16 00:33:02.780706 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-03-16 00:33:02.780724 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-03-16 00:33:02.780742 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:33:02.780760 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-03-16 00:33:02.780778 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-03-16 00:33:02.780796 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:33:02.780853 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-03-16 00:33:21.983345 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-03-16 00:33:21.983444 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:33:21.983456 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-03-16 00:33:21.983463 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-03-16 00:33:21.983470 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:33:21.983500 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-03-16 00:33:21.983508 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-03-16 00:33:21.983515 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:33:21.983522 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-03-16 00:33:21.983529 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-03-16 00:33:21.983536 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:33:21.983543 | orchestrator |
2026-03-16 00:33:21.983552 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install
python bindings from pip)] *** 2026-03-16 00:33:21.983559 | orchestrator | Monday 16 March 2026 00:33:03 +0000 (0:00:00.876) 0:06:21.633 ********** 2026-03-16 00:33:21.983566 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:33:21.983572 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:33:21.983579 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:33:21.983585 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:33:21.983592 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:33:21.983599 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:33:21.983605 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:33:21.983612 | orchestrator | 2026-03-16 00:33:21.983619 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-03-16 00:33:21.983626 | orchestrator | Monday 16 March 2026 00:33:03 +0000 (0:00:00.496) 0:06:22.130 ********** 2026-03-16 00:33:21.983632 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:33:21.983638 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:33:21.983645 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:33:21.983651 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:33:21.983656 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:33:21.983663 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:33:21.983669 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:33:21.983675 | orchestrator | 2026-03-16 00:33:21.983681 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2026-03-16 00:33:21.983688 | orchestrator | Monday 16 March 2026 00:33:04 +0000 (0:00:00.507) 0:06:22.637 ********** 2026-03-16 00:33:21.983694 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:33:21.983700 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:33:21.983705 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:33:21.983711 | orchestrator | skipping: 
[testbed-node-5] 2026-03-16 00:33:21.983717 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:33:21.983722 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:33:21.983728 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:33:21.983734 | orchestrator | 2026-03-16 00:33:21.983740 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-03-16 00:33:21.983746 | orchestrator | Monday 16 March 2026 00:33:04 +0000 (0:00:00.534) 0:06:23.172 ********** 2026-03-16 00:33:21.983752 | orchestrator | ok: [testbed-manager] 2026-03-16 00:33:21.983758 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:33:21.983763 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:33:21.983769 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:33:21.983775 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:33:21.983780 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:33:21.983836 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:33:21.983842 | orchestrator | 2026-03-16 00:33:21.983848 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-03-16 00:33:21.983855 | orchestrator | Monday 16 March 2026 00:33:06 +0000 (0:00:01.991) 0:06:25.163 ********** 2026-03-16 00:33:21.983864 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:33:21.983873 | orchestrator | 2026-03-16 00:33:21.983881 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-03-16 00:33:21.983888 | orchestrator | Monday 16 March 2026 00:33:07 +0000 (0:00:00.855) 0:06:26.018 ********** 2026-03-16 00:33:21.983909 | orchestrator | ok: [testbed-manager] 2026-03-16 00:33:21.983916 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:33:21.983923 | orchestrator | changed: 
[testbed-node-4] 2026-03-16 00:33:21.983931 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:33:21.983939 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:33:21.983947 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:33:21.983955 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:33:21.983962 | orchestrator | 2026-03-16 00:33:21.983970 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-03-16 00:33:21.983977 | orchestrator | Monday 16 March 2026 00:33:08 +0000 (0:00:00.839) 0:06:26.857 ********** 2026-03-16 00:33:21.983985 | orchestrator | ok: [testbed-manager] 2026-03-16 00:33:21.983992 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:33:21.983999 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:33:21.984007 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:33:21.984014 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:33:21.984021 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:33:21.984029 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:33:21.984036 | orchestrator | 2026-03-16 00:33:21.984043 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-03-16 00:33:21.984051 | orchestrator | Monday 16 March 2026 00:33:09 +0000 (0:00:00.862) 0:06:27.720 ********** 2026-03-16 00:33:21.984059 | orchestrator | ok: [testbed-manager] 2026-03-16 00:33:21.984066 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:33:21.984073 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:33:21.984081 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:33:21.984088 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:33:21.984096 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:33:21.984103 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:33:21.984111 | orchestrator | 2026-03-16 00:33:21.984119 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay 
file is changed] *** 2026-03-16 00:33:21.984145 | orchestrator | Monday 16 March 2026 00:33:10 +0000 (0:00:01.504) 0:06:29.224 ********** 2026-03-16 00:33:21.984152 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:33:21.984160 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:33:21.984168 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:33:21.984176 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:33:21.984183 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:33:21.984190 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:33:21.984196 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:33:21.984202 | orchestrator | 2026-03-16 00:33:21.984208 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-03-16 00:33:21.984214 | orchestrator | Monday 16 March 2026 00:33:12 +0000 (0:00:01.451) 0:06:30.676 ********** 2026-03-16 00:33:21.984220 | orchestrator | ok: [testbed-manager] 2026-03-16 00:33:21.984227 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:33:21.984234 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:33:21.984241 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:33:21.984248 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:33:21.984255 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:33:21.984262 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:33:21.984268 | orchestrator | 2026-03-16 00:33:21.984275 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-03-16 00:33:21.984282 | orchestrator | Monday 16 March 2026 00:33:13 +0000 (0:00:01.363) 0:06:32.039 ********** 2026-03-16 00:33:21.984289 | orchestrator | changed: [testbed-manager] 2026-03-16 00:33:21.984296 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:33:21.984303 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:33:21.984309 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:33:21.984316 | orchestrator | changed: 
[testbed-node-0] 2026-03-16 00:33:21.984322 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:33:21.984329 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:33:21.984336 | orchestrator | 2026-03-16 00:33:21.984349 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-03-16 00:33:21.984356 | orchestrator | Monday 16 March 2026 00:33:14 +0000 (0:00:01.347) 0:06:33.387 ********** 2026-03-16 00:33:21.984363 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:33:21.984370 | orchestrator | 2026-03-16 00:33:21.984376 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-03-16 00:33:21.984383 | orchestrator | Monday 16 March 2026 00:33:15 +0000 (0:00:00.983) 0:06:34.371 ********** 2026-03-16 00:33:21.984389 | orchestrator | ok: [testbed-manager] 2026-03-16 00:33:21.984396 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:33:21.984403 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:33:21.984410 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:33:21.984417 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:33:21.984423 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:33:21.984430 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:33:21.984436 | orchestrator | 2026-03-16 00:33:21.984443 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-03-16 00:33:21.984450 | orchestrator | Monday 16 March 2026 00:33:17 +0000 (0:00:01.393) 0:06:35.765 ********** 2026-03-16 00:33:21.984456 | orchestrator | ok: [testbed-manager] 2026-03-16 00:33:21.984463 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:33:21.984470 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:33:21.984476 | orchestrator | ok: [testbed-node-5] 
2026-03-16 00:33:21.984483 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:33:21.984502 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:33:21.984509 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:33:21.984515 | orchestrator | 2026-03-16 00:33:21.984522 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-03-16 00:33:21.984529 | orchestrator | Monday 16 March 2026 00:33:18 +0000 (0:00:01.159) 0:06:36.925 ********** 2026-03-16 00:33:21.984535 | orchestrator | ok: [testbed-manager] 2026-03-16 00:33:21.984542 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:33:21.984548 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:33:21.984555 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:33:21.984561 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:33:21.984568 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:33:21.984575 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:33:21.984582 | orchestrator | 2026-03-16 00:33:21.984588 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-03-16 00:33:21.984595 | orchestrator | Monday 16 March 2026 00:33:19 +0000 (0:00:01.102) 0:06:38.028 ********** 2026-03-16 00:33:21.984601 | orchestrator | ok: [testbed-manager] 2026-03-16 00:33:21.984608 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:33:21.984615 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:33:21.984621 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:33:21.984627 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:33:21.984633 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:33:21.984640 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:33:21.984646 | orchestrator | 2026-03-16 00:33:21.984653 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-03-16 00:33:21.984660 | orchestrator | Monday 16 March 2026 00:33:20 +0000 (0:00:01.344) 0:06:39.373 ********** 2026-03-16 00:33:21.984666 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:33:21.984673 | orchestrator | 2026-03-16 00:33:21.984679 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-16 00:33:21.984686 | orchestrator | Monday 16 March 2026 00:33:21 +0000 (0:00:00.879) 0:06:40.253 ********** 2026-03-16 00:33:21.984692 | orchestrator | 2026-03-16 00:33:21.984699 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-16 00:33:21.984711 | orchestrator | Monday 16 March 2026 00:33:21 +0000 (0:00:00.043) 0:06:40.296 ********** 2026-03-16 00:33:21.984717 | orchestrator | 2026-03-16 00:33:21.984724 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-16 00:33:21.984730 | orchestrator | Monday 16 March 2026 00:33:21 +0000 (0:00:00.042) 0:06:40.338 ********** 2026-03-16 00:33:21.984737 | orchestrator | 2026-03-16 00:33:21.984744 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-16 00:33:21.984756 | orchestrator | Monday 16 March 2026 00:33:21 +0000 (0:00:00.049) 0:06:40.388 ********** 2026-03-16 00:33:47.589005 | orchestrator | 2026-03-16 00:33:47.589118 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-16 00:33:47.589136 | orchestrator | Monday 16 March 2026 00:33:21 +0000 (0:00:00.040) 0:06:40.429 ********** 2026-03-16 00:33:47.589149 | orchestrator | 2026-03-16 00:33:47.589160 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-16 00:33:47.589171 | orchestrator | Monday 16 March 2026 00:33:21 +0000 (0:00:00.041) 0:06:40.470 ********** 2026-03-16 00:33:47.589182 | orchestrator | 
2026-03-16 00:33:47.589193 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-16 00:33:47.589204 | orchestrator | Monday 16 March 2026 00:33:21 +0000 (0:00:00.047) 0:06:40.517 ********** 2026-03-16 00:33:47.589215 | orchestrator | 2026-03-16 00:33:47.589226 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-16 00:33:47.589237 | orchestrator | Monday 16 March 2026 00:33:21 +0000 (0:00:00.039) 0:06:40.557 ********** 2026-03-16 00:33:47.589248 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:33:47.589260 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:33:47.589271 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:33:47.589282 | orchestrator | 2026-03-16 00:33:47.589293 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-03-16 00:33:47.589303 | orchestrator | Monday 16 March 2026 00:33:23 +0000 (0:00:01.258) 0:06:41.816 ********** 2026-03-16 00:33:47.589314 | orchestrator | changed: [testbed-manager] 2026-03-16 00:33:47.589326 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:33:47.589337 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:33:47.589348 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:33:47.589358 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:33:47.589369 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:33:47.589380 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:33:47.589390 | orchestrator | 2026-03-16 00:33:47.589401 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-03-16 00:33:47.589412 | orchestrator | Monday 16 March 2026 00:33:24 +0000 (0:00:01.277) 0:06:43.094 ********** 2026-03-16 00:33:47.589423 | orchestrator | changed: [testbed-manager] 2026-03-16 00:33:47.589434 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:33:47.589445 | orchestrator | changed: [testbed-node-4] 
2026-03-16 00:33:47.589455 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:33:47.589466 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:33:47.589477 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:33:47.589487 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:33:47.589498 | orchestrator | 2026-03-16 00:33:47.589509 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-03-16 00:33:47.589520 | orchestrator | Monday 16 March 2026 00:33:25 +0000 (0:00:01.375) 0:06:44.469 ********** 2026-03-16 00:33:47.589531 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:33:47.589544 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:33:47.589556 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:33:47.589569 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:33:47.589581 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:33:47.589594 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:33:47.589607 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:33:47.589618 | orchestrator | 2026-03-16 00:33:47.589629 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-03-16 00:33:47.589640 | orchestrator | Monday 16 March 2026 00:33:28 +0000 (0:00:02.274) 0:06:46.743 ********** 2026-03-16 00:33:47.589676 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:33:47.589688 | orchestrator | 2026-03-16 00:33:47.589715 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-03-16 00:33:47.589777 | orchestrator | Monday 16 March 2026 00:33:28 +0000 (0:00:00.098) 0:06:46.841 ********** 2026-03-16 00:33:47.589799 | orchestrator | ok: [testbed-manager] 2026-03-16 00:33:47.589817 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:33:47.589837 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:33:47.589851 | orchestrator | changed: [testbed-node-5] 2026-03-16 
00:33:47.589862 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:33:47.589872 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:33:47.589883 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:33:47.589894 | orchestrator | 2026-03-16 00:33:47.589905 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-03-16 00:33:47.589917 | orchestrator | Monday 16 March 2026 00:33:29 +0000 (0:00:00.952) 0:06:47.794 ********** 2026-03-16 00:33:47.589928 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:33:47.589938 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:33:47.589949 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:33:47.589960 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:33:47.589970 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:33:47.589981 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:33:47.589991 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:33:47.590002 | orchestrator | 2026-03-16 00:33:47.590013 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-03-16 00:33:47.590079 | orchestrator | Monday 16 March 2026 00:33:29 +0000 (0:00:00.442) 0:06:48.237 ********** 2026-03-16 00:33:47.590092 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:33:47.590106 | orchestrator | 2026-03-16 00:33:47.590117 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-03-16 00:33:47.590127 | orchestrator | Monday 16 March 2026 00:33:30 +0000 (0:00:00.901) 0:06:49.139 ********** 2026-03-16 00:33:47.590139 | orchestrator | ok: [testbed-manager] 2026-03-16 00:33:47.590149 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:33:47.590161 | orchestrator 
| ok: [testbed-node-4] 2026-03-16 00:33:47.590171 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:33:47.590182 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:33:47.590193 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:33:47.590204 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:33:47.590215 | orchestrator | 2026-03-16 00:33:47.590226 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-03-16 00:33:47.590237 | orchestrator | Monday 16 March 2026 00:33:31 +0000 (0:00:00.852) 0:06:49.992 ********** 2026-03-16 00:33:47.590248 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-03-16 00:33:47.590279 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-03-16 00:33:47.590291 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-03-16 00:33:47.590301 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-03-16 00:33:47.590312 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-03-16 00:33:47.590323 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-03-16 00:33:47.590334 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-03-16 00:33:47.590345 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-03-16 00:33:47.590355 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-03-16 00:33:47.590366 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-03-16 00:33:47.590377 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-03-16 00:33:47.590387 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-03-16 00:33:47.590413 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-03-16 00:33:47.590424 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-03-16 00:33:47.590435 | orchestrator | 2026-03-16 00:33:47.590446 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-03-16 00:33:47.590456 | orchestrator | Monday 16 March 2026 00:33:33 +0000 (0:00:02.408) 0:06:52.401 ********** 2026-03-16 00:33:47.590467 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:33:47.590478 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:33:47.590488 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:33:47.590499 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:33:47.590509 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:33:47.590520 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:33:47.590530 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:33:47.590541 | orchestrator | 2026-03-16 00:33:47.590552 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-03-16 00:33:47.590563 | orchestrator | Monday 16 March 2026 00:33:34 +0000 (0:00:00.683) 0:06:53.084 ********** 2026-03-16 00:33:47.590575 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:33:47.590587 | orchestrator | 2026-03-16 00:33:47.590598 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-03-16 00:33:47.590609 | orchestrator | Monday 16 March 2026 00:33:35 +0000 (0:00:00.776) 0:06:53.861 ********** 2026-03-16 00:33:47.590620 | orchestrator | ok: [testbed-manager] 2026-03-16 00:33:47.590631 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:33:47.590641 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:33:47.590652 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:33:47.590663 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:33:47.590673 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:33:47.590684 | orchestrator | ok: 
[testbed-node-2] 2026-03-16 00:33:47.590694 | orchestrator | 2026-03-16 00:33:47.590705 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-03-16 00:33:47.590716 | orchestrator | Monday 16 March 2026 00:33:36 +0000 (0:00:00.838) 0:06:54.699 ********** 2026-03-16 00:33:47.590761 | orchestrator | ok: [testbed-manager] 2026-03-16 00:33:47.590774 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:33:47.590785 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:33:47.590795 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:33:47.590806 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:33:47.590816 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:33:47.590827 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:33:47.590837 | orchestrator | 2026-03-16 00:33:47.590848 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-03-16 00:33:47.590859 | orchestrator | Monday 16 March 2026 00:33:37 +0000 (0:00:00.975) 0:06:55.675 ********** 2026-03-16 00:33:47.590870 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:33:47.590880 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:33:47.590891 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:33:47.590902 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:33:47.590913 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:33:47.590923 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:33:47.590934 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:33:47.590944 | orchestrator | 2026-03-16 00:33:47.590955 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-03-16 00:33:47.590966 | orchestrator | Monday 16 March 2026 00:33:37 +0000 (0:00:00.490) 0:06:56.166 ********** 2026-03-16 00:33:47.590976 | orchestrator | ok: [testbed-manager] 2026-03-16 00:33:47.590987 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:33:47.590997 | 
orchestrator | ok: [testbed-node-4] 2026-03-16 00:33:47.591008 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:33:47.591018 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:33:47.591036 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:33:47.591047 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:33:47.591057 | orchestrator | 2026-03-16 00:33:47.591068 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-03-16 00:33:47.591078 | orchestrator | Monday 16 March 2026 00:33:39 +0000 (0:00:01.468) 0:06:57.634 ********** 2026-03-16 00:33:47.591089 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:33:47.591100 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:33:47.591110 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:33:47.591121 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:33:47.591132 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:33:47.591142 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:33:47.591152 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:33:47.591163 | orchestrator | 2026-03-16 00:33:47.591174 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-03-16 00:33:47.591184 | orchestrator | Monday 16 March 2026 00:33:39 +0000 (0:00:00.505) 0:06:58.140 ********** 2026-03-16 00:33:47.591195 | orchestrator | ok: [testbed-manager] 2026-03-16 00:33:47.591206 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:33:47.591216 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:33:47.591227 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:33:47.591237 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:33:47.591248 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:33:47.591266 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:34:19.864580 | orchestrator | 2026-03-16 00:34:19.864788 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target 
systemd file] *********** 2026-03-16 00:34:19.864810 | orchestrator | Monday 16 March 2026 00:33:47 +0000 (0:00:08.024) 0:07:06.164 ********** 2026-03-16 00:34:19.864823 | orchestrator | ok: [testbed-manager] 2026-03-16 00:34:19.864835 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:34:19.864847 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:34:19.864858 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:34:19.864868 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:34:19.864879 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:34:19.864890 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:34:19.864901 | orchestrator | 2026-03-16 00:34:19.864912 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-03-16 00:34:19.864924 | orchestrator | Monday 16 March 2026 00:33:49 +0000 (0:00:01.520) 0:07:07.685 ********** 2026-03-16 00:34:19.864935 | orchestrator | ok: [testbed-manager] 2026-03-16 00:34:19.864946 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:34:19.864957 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:34:19.864968 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:34:19.864979 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:34:19.864997 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:34:19.865017 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:34:19.865035 | orchestrator | 2026-03-16 00:34:19.865053 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-03-16 00:34:19.865072 | orchestrator | Monday 16 March 2026 00:33:50 +0000 (0:00:01.737) 0:07:09.423 ********** 2026-03-16 00:34:19.865090 | orchestrator | ok: [testbed-manager] 2026-03-16 00:34:19.865107 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:34:19.865126 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:34:19.865145 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:34:19.865164 | 
orchestrator | changed: [testbed-node-0] 2026-03-16 00:34:19.865184 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:34:19.865202 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:34:19.865221 | orchestrator | 2026-03-16 00:34:19.865241 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-16 00:34:19.865261 | orchestrator | Monday 16 March 2026 00:33:52 +0000 (0:00:01.605) 0:07:11.029 ********** 2026-03-16 00:34:19.865281 | orchestrator | ok: [testbed-manager] 2026-03-16 00:34:19.865300 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:34:19.865317 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:34:19.865358 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:34:19.865371 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:34:19.865384 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:34:19.865397 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:34:19.865411 | orchestrator | 2026-03-16 00:34:19.865430 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-16 00:34:19.865451 | orchestrator | Monday 16 March 2026 00:33:53 +0000 (0:00:00.802) 0:07:11.832 ********** 2026-03-16 00:34:19.865467 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:34:19.865479 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:34:19.865490 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:34:19.865500 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:34:19.865511 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:34:19.865522 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:34:19.865532 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:34:19.865543 | orchestrator | 2026-03-16 00:34:19.865554 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-03-16 00:34:19.865565 | orchestrator | Monday 16 March 2026 00:33:54 +0000 (0:00:00.899) 0:07:12.732 ********** 
2026-03-16 00:34:19.865576 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:34:19.865587 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:34:19.865597 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:34:19.865608 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:34:19.865618 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:34:19.865629 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:34:19.865640 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:34:19.865650 | orchestrator | 2026-03-16 00:34:19.865684 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-03-16 00:34:19.865696 | orchestrator | Monday 16 March 2026 00:33:54 +0000 (0:00:00.434) 0:07:13.166 ********** 2026-03-16 00:34:19.865707 | orchestrator | ok: [testbed-manager] 2026-03-16 00:34:19.865734 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:34:19.865746 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:34:19.865756 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:34:19.865767 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:34:19.865777 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:34:19.865788 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:34:19.865798 | orchestrator | 2026-03-16 00:34:19.865809 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2026-03-16 00:34:19.865820 | orchestrator | Monday 16 March 2026 00:33:55 +0000 (0:00:00.446) 0:07:13.613 ********** 2026-03-16 00:34:19.865831 | orchestrator | ok: [testbed-manager] 2026-03-16 00:34:19.865841 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:34:19.865852 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:34:19.865863 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:34:19.865873 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:34:19.865884 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:34:19.865894 | orchestrator | ok: [testbed-node-2] 2026-03-16 
00:34:19.865905 | orchestrator | 2026-03-16 00:34:19.865916 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2026-03-16 00:34:19.865926 | orchestrator | Monday 16 March 2026 00:33:55 +0000 (0:00:00.476) 0:07:14.089 ********** 2026-03-16 00:34:19.865937 | orchestrator | ok: [testbed-manager] 2026-03-16 00:34:19.865948 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:34:19.865958 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:34:19.865969 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:34:19.865979 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:34:19.865990 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:34:19.866000 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:34:19.866011 | orchestrator | 2026-03-16 00:34:19.866075 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2026-03-16 00:34:19.866086 | orchestrator | Monday 16 March 2026 00:33:56 +0000 (0:00:00.682) 0:07:14.772 ********** 2026-03-16 00:34:19.866097 | orchestrator | ok: [testbed-manager] 2026-03-16 00:34:19.866108 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:34:19.866128 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:34:19.866139 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:34:19.866150 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:34:19.866160 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:34:19.866171 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:34:19.866182 | orchestrator | 2026-03-16 00:34:19.866214 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2026-03-16 00:34:19.866225 | orchestrator | Monday 16 March 2026 00:34:01 +0000 (0:00:05.629) 0:07:20.402 ********** 2026-03-16 00:34:19.866236 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:34:19.866247 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:34:19.866257 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:34:19.866268 
| orchestrator | skipping: [testbed-node-5] 2026-03-16 00:34:19.866279 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:34:19.866289 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:34:19.866300 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:34:19.866311 | orchestrator | 2026-03-16 00:34:19.866321 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2026-03-16 00:34:19.866332 | orchestrator | Monday 16 March 2026 00:34:02 +0000 (0:00:00.540) 0:07:20.942 ********** 2026-03-16 00:34:19.866345 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:34:19.866358 | orchestrator | 2026-03-16 00:34:19.866369 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2026-03-16 00:34:19.866380 | orchestrator | Monday 16 March 2026 00:34:03 +0000 (0:00:00.988) 0:07:21.930 ********** 2026-03-16 00:34:19.866390 | orchestrator | ok: [testbed-manager] 2026-03-16 00:34:19.866401 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:34:19.866412 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:34:19.866422 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:34:19.866432 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:34:19.866443 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:34:19.866453 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:34:19.866464 | orchestrator | 2026-03-16 00:34:19.866474 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2026-03-16 00:34:19.866485 | orchestrator | Monday 16 March 2026 00:34:05 +0000 (0:00:02.134) 0:07:24.064 ********** 2026-03-16 00:34:19.866496 | orchestrator | ok: [testbed-manager] 2026-03-16 00:34:19.866506 | orchestrator | ok: [testbed-node-3] 2026-03-16 
00:34:19.866517 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:34:19.866527 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:34:19.866538 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:34:19.866548 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:34:19.866559 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:34:19.866569 | orchestrator | 2026-03-16 00:34:19.866580 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2026-03-16 00:34:19.866591 | orchestrator | Monday 16 March 2026 00:34:06 +0000 (0:00:01.110) 0:07:25.175 ********** 2026-03-16 00:34:19.866601 | orchestrator | ok: [testbed-manager] 2026-03-16 00:34:19.866612 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:34:19.866622 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:34:19.866632 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:34:19.866643 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:34:19.866653 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:34:19.866684 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:34:19.866695 | orchestrator | 2026-03-16 00:34:19.866706 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2026-03-16 00:34:19.866717 | orchestrator | Monday 16 March 2026 00:34:07 +0000 (0:00:00.831) 0:07:26.006 ********** 2026-03-16 00:34:19.866733 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-16 00:34:19.866746 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-16 00:34:19.866764 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-16 00:34:19.866775 | orchestrator | changed: [testbed-node-5] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-16 00:34:19.866786 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-16 00:34:19.866796 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-16 00:34:19.866807 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-16 00:34:19.866818 | orchestrator | 2026-03-16 00:34:19.866828 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2026-03-16 00:34:19.866839 | orchestrator | Monday 16 March 2026 00:34:09 +0000 (0:00:01.878) 0:07:27.884 ********** 2026-03-16 00:34:19.866850 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:34:19.866861 | orchestrator | 2026-03-16 00:34:19.866871 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2026-03-16 00:34:19.866882 | orchestrator | Monday 16 March 2026 00:34:10 +0000 (0:00:00.753) 0:07:28.638 ********** 2026-03-16 00:34:19.866893 | orchestrator | changed: [testbed-manager] 2026-03-16 00:34:19.866903 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:34:19.866914 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:34:19.866925 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:34:19.866936 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:34:19.866946 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:34:19.866957 | orchestrator | changed: 
[testbed-node-2] 2026-03-16 00:34:19.866967 | orchestrator | 2026-03-16 00:34:19.866985 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-03-16 00:34:50.411065 | orchestrator | Monday 16 March 2026 00:34:19 +0000 (0:00:09.808) 0:07:38.446 ********** 2026-03-16 00:34:50.411205 | orchestrator | ok: [testbed-manager] 2026-03-16 00:34:50.411959 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:34:50.411991 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:34:50.412003 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:34:50.412014 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:34:50.412025 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:34:50.412036 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:34:50.412047 | orchestrator | 2026-03-16 00:34:50.412060 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-03-16 00:34:50.412072 | orchestrator | Monday 16 March 2026 00:34:21 +0000 (0:00:01.762) 0:07:40.209 ********** 2026-03-16 00:34:50.412083 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:34:50.412094 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:34:50.412105 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:34:50.412116 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:34:50.412127 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:34:50.412137 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:34:50.412149 | orchestrator | 2026-03-16 00:34:50.412160 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-03-16 00:34:50.412171 | orchestrator | Monday 16 March 2026 00:34:22 +0000 (0:00:01.288) 0:07:41.498 ********** 2026-03-16 00:34:50.412182 | orchestrator | changed: [testbed-manager] 2026-03-16 00:34:50.412194 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:34:50.412205 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:34:50.412216 | orchestrator | changed: 
[testbed-node-5] 2026-03-16 00:34:50.412227 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:34:50.412268 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:34:50.412280 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:34:50.412290 | orchestrator | 2026-03-16 00:34:50.412301 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-03-16 00:34:50.412312 | orchestrator | 2026-03-16 00:34:50.412323 | orchestrator | TASK [Include hardening role] ************************************************** 2026-03-16 00:34:50.412333 | orchestrator | Monday 16 March 2026 00:34:24 +0000 (0:00:01.210) 0:07:42.708 ********** 2026-03-16 00:34:50.412344 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:34:50.412356 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:34:50.412376 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:34:50.412393 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:34:50.412410 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:34:50.412428 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:34:50.412446 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:34:50.412462 | orchestrator | 2026-03-16 00:34:50.412480 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-03-16 00:34:50.412500 | orchestrator | 2026-03-16 00:34:50.412519 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2026-03-16 00:34:50.412538 | orchestrator | Monday 16 March 2026 00:34:24 +0000 (0:00:00.698) 0:07:43.407 ********** 2026-03-16 00:34:50.412557 | orchestrator | changed: [testbed-manager] 2026-03-16 00:34:50.412575 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:34:50.412594 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:34:50.412673 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:34:50.412685 | orchestrator | changed: [testbed-node-0] 2026-03-16 
00:34:50.412696 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:34:50.412707 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:34:50.412718 | orchestrator | 2026-03-16 00:34:50.412730 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-03-16 00:34:50.412757 | orchestrator | Monday 16 March 2026 00:34:26 +0000 (0:00:01.381) 0:07:44.788 ********** 2026-03-16 00:34:50.412769 | orchestrator | ok: [testbed-manager] 2026-03-16 00:34:50.412780 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:34:50.412790 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:34:50.412801 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:34:50.412812 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:34:50.412822 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:34:50.412833 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:34:50.412844 | orchestrator | 2026-03-16 00:34:50.412862 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-03-16 00:34:50.412882 | orchestrator | Monday 16 March 2026 00:34:27 +0000 (0:00:01.453) 0:07:46.242 ********** 2026-03-16 00:34:50.412900 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:34:50.412920 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:34:50.412939 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:34:50.412959 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:34:50.412979 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:34:50.412999 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:34:50.413018 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:34:50.413036 | orchestrator | 2026-03-16 00:34:50.413052 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-03-16 00:34:50.413063 | orchestrator | Monday 16 March 2026 00:34:28 +0000 (0:00:00.520) 0:07:46.763 ********** 2026-03-16 00:34:50.413076 | orchestrator | included: 
osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:34:50.413097 | orchestrator | 2026-03-16 00:34:50.413114 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-03-16 00:34:50.413134 | orchestrator | Monday 16 March 2026 00:34:29 +0000 (0:00:01.092) 0:07:47.855 ********** 2026-03-16 00:34:50.413155 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:34:50.413193 | orchestrator | 2026-03-16 00:34:50.413205 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-03-16 00:34:50.413216 | orchestrator | Monday 16 March 2026 00:34:30 +0000 (0:00:00.791) 0:07:48.646 ********** 2026-03-16 00:34:50.413227 | orchestrator | changed: [testbed-manager] 2026-03-16 00:34:50.413238 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:34:50.413249 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:34:50.413259 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:34:50.413270 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:34:50.413281 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:34:50.413292 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:34:50.413303 | orchestrator | 2026-03-16 00:34:50.413338 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-03-16 00:34:50.413350 | orchestrator | Monday 16 March 2026 00:34:38 +0000 (0:00:08.887) 0:07:57.534 ********** 2026-03-16 00:34:50.413361 | orchestrator | changed: [testbed-manager] 2026-03-16 00:34:50.413371 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:34:50.413382 | orchestrator | changed: [testbed-node-4] 2026-03-16 
00:34:50.413392 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:34:50.413403 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:34:50.413413 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:34:50.413424 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:34:50.413435 | orchestrator | 2026-03-16 00:34:50.413446 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-03-16 00:34:50.413457 | orchestrator | Monday 16 March 2026 00:34:40 +0000 (0:00:01.079) 0:07:58.613 ********** 2026-03-16 00:34:50.413467 | orchestrator | changed: [testbed-manager] 2026-03-16 00:34:50.413478 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:34:50.413488 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:34:50.413499 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:34:50.413509 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:34:50.413520 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:34:50.413531 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:34:50.413541 | orchestrator | 2026-03-16 00:34:50.413552 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-03-16 00:34:50.413563 | orchestrator | Monday 16 March 2026 00:34:41 +0000 (0:00:01.367) 0:07:59.981 ********** 2026-03-16 00:34:50.413574 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:34:50.413584 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:34:50.413595 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:34:50.413627 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:34:50.413638 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:34:50.413649 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:34:50.413660 | orchestrator | changed: [testbed-manager] 2026-03-16 00:34:50.413671 | orchestrator | 2026-03-16 00:34:50.413681 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 
2026-03-16 00:34:50.413692 | orchestrator | Monday 16 March 2026 00:34:43 +0000 (0:00:02.469) 0:08:02.450 ********** 2026-03-16 00:34:50.413703 | orchestrator | changed: [testbed-manager] 2026-03-16 00:34:50.413714 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:34:50.413724 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:34:50.413735 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:34:50.413745 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:34:50.413756 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:34:50.413767 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:34:50.413778 | orchestrator | 2026-03-16 00:34:50.413788 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-03-16 00:34:50.413799 | orchestrator | Monday 16 March 2026 00:34:45 +0000 (0:00:01.223) 0:08:03.673 ********** 2026-03-16 00:34:50.413810 | orchestrator | changed: [testbed-manager] 2026-03-16 00:34:50.413821 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:34:50.413841 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:34:50.413852 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:34:50.413863 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:34:50.413873 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:34:50.413884 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:34:50.413894 | orchestrator | 2026-03-16 00:34:50.413905 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-03-16 00:34:50.413916 | orchestrator | 2026-03-16 00:34:50.413934 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-03-16 00:34:50.413945 | orchestrator | Monday 16 March 2026 00:34:46 +0000 (0:00:01.061) 0:08:04.735 ********** 2026-03-16 00:34:50.413957 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-16 00:34:50.413968 | orchestrator | 2026-03-16 00:34:50.413978 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-03-16 00:34:50.413989 | orchestrator | Monday 16 March 2026 00:34:46 +0000 (0:00:00.697) 0:08:05.432 ********** 2026-03-16 00:34:50.414000 | orchestrator | ok: [testbed-manager] 2026-03-16 00:34:50.414011 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:34:50.414117 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:34:50.414129 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:34:50.414140 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:34:50.414151 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:34:50.414162 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:34:50.414172 | orchestrator | 2026-03-16 00:34:50.414183 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-03-16 00:34:50.414194 | orchestrator | Monday 16 March 2026 00:34:47 +0000 (0:00:00.931) 0:08:06.364 ********** 2026-03-16 00:34:50.414206 | orchestrator | changed: [testbed-manager] 2026-03-16 00:34:50.414216 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:34:50.414227 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:34:50.414238 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:34:50.414249 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:34:50.414260 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:34:50.414270 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:34:50.414281 | orchestrator | 2026-03-16 00:34:50.414292 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-03-16 00:34:50.414303 | orchestrator | Monday 16 March 2026 00:34:48 +0000 (0:00:01.019) 0:08:07.384 ********** 2026-03-16 00:34:50.414314 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-16 00:34:50.414325 | orchestrator | 2026-03-16 00:34:50.414336 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-03-16 00:34:50.414347 | orchestrator | Monday 16 March 2026 00:34:49 +0000 (0:00:00.830) 0:08:08.214 ********** 2026-03-16 00:34:50.414358 | orchestrator | ok: [testbed-manager] 2026-03-16 00:34:50.414380 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:34:50.414391 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:34:50.414402 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:34:50.414413 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:34:50.414423 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:34:50.414434 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:34:50.414444 | orchestrator | 2026-03-16 00:34:50.414468 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-03-16 00:34:51.849433 | orchestrator | Monday 16 March 2026 00:34:50 +0000 (0:00:00.776) 0:08:08.990 ********** 2026-03-16 00:34:51.849554 | orchestrator | changed: [testbed-manager] 2026-03-16 00:34:51.849573 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:34:51.849585 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:34:51.849596 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:34:51.849692 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:34:51.849704 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:34:51.849714 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:34:51.849753 | orchestrator | 2026-03-16 00:34:51.849766 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-16 00:34:51.849778 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-03-16 00:34:51.849791 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 
2026-03-16 00:34:51.849802 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-16 00:34:51.849812 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-16 00:34:51.849823 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0 2026-03-16 00:34:51.849833 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-16 00:34:51.849844 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-16 00:34:51.849866 | orchestrator | 2026-03-16 00:34:51.849877 | orchestrator | 2026-03-16 00:34:51.849888 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-16 00:34:51.849899 | orchestrator | Monday 16 March 2026 00:34:51 +0000 (0:00:01.158) 0:08:10.149 ********** 2026-03-16 00:34:51.849910 | orchestrator | =============================================================================== 2026-03-16 00:34:51.849921 | orchestrator | osism.commons.packages : Install required packages --------------------- 78.85s 2026-03-16 00:34:51.849932 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 35.00s 2026-03-16 00:34:51.849943 | orchestrator | osism.commons.packages : Download required packages -------------------- 31.08s 2026-03-16 00:34:51.849957 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.61s 2026-03-16 00:34:51.849969 | orchestrator | osism.services.docker : Install docker package ------------------------- 12.19s 2026-03-16 00:34:51.849998 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.77s 2026-03-16 00:34:51.850100 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 
10.40s 2026-03-16 00:34:51.850125 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.38s 2026-03-16 00:34:51.850145 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.81s 2026-03-16 00:34:51.850165 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 9.31s 2026-03-16 00:34:51.850184 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.96s 2026-03-16 00:34:51.850203 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.94s 2026-03-16 00:34:51.850221 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.89s 2026-03-16 00:34:51.850240 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.81s 2026-03-16 00:34:51.850260 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.16s 2026-03-16 00:34:51.850281 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.02s 2026-03-16 00:34:51.850300 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.90s 2026-03-16 00:34:51.850319 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.39s 2026-03-16 00:34:51.850338 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.63s 2026-03-16 00:34:51.850357 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.45s 2026-03-16 00:34:52.093679 | orchestrator | + osism apply fail2ban 2026-03-16 00:35:04.399871 | orchestrator | 2026-03-16 00:35:04 | INFO  | Task 3262df13-2a48-4399-a893-57f343b52404 (fail2ban) was prepared for execution. 
2026-03-16 00:35:04.399976 | orchestrator | 2026-03-16 00:35:04 | INFO  | It takes a moment until task 3262df13-2a48-4399-a893-57f343b52404 (fail2ban) has been started and output is visible here. 2026-03-16 00:35:25.180836 | orchestrator | 2026-03-16 00:35:25.180951 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-03-16 00:35:25.180967 | orchestrator | 2026-03-16 00:35:25.180980 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-03-16 00:35:25.180991 | orchestrator | Monday 16 March 2026 00:35:08 +0000 (0:00:00.252) 0:00:00.252 ********** 2026-03-16 00:35:25.181004 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-16 00:35:25.181017 | orchestrator | 2026-03-16 00:35:25.181028 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-03-16 00:35:25.181039 | orchestrator | Monday 16 March 2026 00:35:09 +0000 (0:00:01.050) 0:00:01.303 ********** 2026-03-16 00:35:25.181050 | orchestrator | changed: [testbed-manager] 2026-03-16 00:35:25.181062 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:35:25.181073 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:35:25.181084 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:35:25.181094 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:35:25.181105 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:35:25.181115 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:35:25.181127 | orchestrator | 2026-03-16 00:35:25.181138 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-03-16 00:35:25.181149 | orchestrator | Monday 16 March 2026 00:35:20 +0000 (0:00:10.936) 0:00:12.239 ********** 
2026-03-16 00:35:25.181160 | orchestrator | changed: [testbed-manager] 2026-03-16 00:35:25.181171 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:35:25.181182 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:35:25.181192 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:35:25.181203 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:35:25.181213 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:35:25.181224 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:35:25.181234 | orchestrator | 2026-03-16 00:35:25.181245 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] *********************** 2026-03-16 00:35:25.181256 | orchestrator | Monday 16 March 2026 00:35:21 +0000 (0:00:01.433) 0:00:13.673 ********** 2026-03-16 00:35:25.181267 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:35:25.181279 | orchestrator | ok: [testbed-manager] 2026-03-16 00:35:25.181289 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:35:25.181300 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:35:25.181311 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:35:25.181321 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:35:25.181332 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:35:25.181343 | orchestrator | 2026-03-16 00:35:25.181353 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] ***************** 2026-03-16 00:35:25.181364 | orchestrator | Monday 16 March 2026 00:35:23 +0000 (0:00:01.418) 0:00:15.091 ********** 2026-03-16 00:35:25.181375 | orchestrator | changed: [testbed-manager] 2026-03-16 00:35:25.181388 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:35:25.181401 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:35:25.181413 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:35:25.181425 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:35:25.181438 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:35:25.181450 | orchestrator | changed: 
[testbed-node-5] 2026-03-16 00:35:25.181462 | orchestrator | 2026-03-16 00:35:25.181474 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-16 00:35:25.181487 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-16 00:35:25.181525 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-16 00:35:25.181579 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-16 00:35:25.181593 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-16 00:35:25.181611 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-16 00:35:25.181629 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-16 00:35:25.181648 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-16 00:35:25.181667 | orchestrator | 2026-03-16 00:35:25.181687 | orchestrator | 2026-03-16 00:35:25.181705 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-16 00:35:25.181724 | orchestrator | Monday 16 March 2026 00:35:24 +0000 (0:00:01.585) 0:00:16.677 ********** 2026-03-16 00:35:25.181738 | orchestrator | =============================================================================== 2026-03-16 00:35:25.181750 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 10.94s 2026-03-16 00:35:25.181762 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.59s 2026-03-16 00:35:25.181772 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.43s 2026-03-16 00:35:25.181783 | orchestrator | osism.services.fail2ban : 
Manage fail2ban service ----------------------- 1.42s 2026-03-16 00:35:25.181794 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.05s 2026-03-16 00:35:25.475037 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-03-16 00:35:25.475137 | orchestrator | + osism apply network 2026-03-16 00:35:37.557301 | orchestrator | 2026-03-16 00:35:37 | INFO  | Task 4be0c69c-659a-4551-99fb-bb05f1faadb8 (network) was prepared for execution. 2026-03-16 00:35:37.557419 | orchestrator | 2026-03-16 00:35:37 | INFO  | It takes a moment until task 4be0c69c-659a-4551-99fb-bb05f1faadb8 (network) has been started and output is visible here. 2026-03-16 00:36:05.744391 | orchestrator | 2026-03-16 00:36:05.744573 | orchestrator | PLAY [Apply role network] ****************************************************** 2026-03-16 00:36:05.744599 | orchestrator | 2026-03-16 00:36:05.744612 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2026-03-16 00:36:05.744624 | orchestrator | Monday 16 March 2026 00:35:41 +0000 (0:00:00.245) 0:00:00.245 ********** 2026-03-16 00:36:05.744636 | orchestrator | ok: [testbed-manager] 2026-03-16 00:36:05.744648 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:36:05.744659 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:36:05.744670 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:36:05.744681 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:36:05.744691 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:36:05.744702 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:36:05.744713 | orchestrator | 2026-03-16 00:36:05.744724 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2026-03-16 00:36:05.744735 | orchestrator | Monday 16 March 2026 00:35:42 +0000 (0:00:00.688) 0:00:00.934 ********** 2026-03-16 00:36:05.744748 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-16 00:36:05.744761 | orchestrator | 2026-03-16 00:36:05.744772 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-03-16 00:36:05.744808 | orchestrator | Monday 16 March 2026 00:35:43 +0000 (0:00:01.100) 0:00:02.034 ********** 2026-03-16 00:36:05.744820 | orchestrator | ok: [testbed-manager] 2026-03-16 00:36:05.744830 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:36:05.744841 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:36:05.744852 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:36:05.744862 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:36:05.744873 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:36:05.744884 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:36:05.744894 | orchestrator | 2026-03-16 00:36:05.744905 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2026-03-16 00:36:05.744916 | orchestrator | Monday 16 March 2026 00:35:45 +0000 (0:00:02.172) 0:00:04.207 ********** 2026-03-16 00:36:05.744927 | orchestrator | ok: [testbed-manager] 2026-03-16 00:36:05.744940 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:36:05.744954 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:36:05.744966 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:36:05.744979 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:36:05.744992 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:36:05.745004 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:36:05.745016 | orchestrator | 2026-03-16 00:36:05.745029 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2026-03-16 00:36:05.745041 | orchestrator | Monday 16 March 2026 00:35:47 +0000 (0:00:01.801) 0:00:06.009 ********** 
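The netplan tasks in this play render a template locally, copy it to `/etc/netplan`, and remove the template again; the rendered file itself never appears in the log. A minimal sketch of what the generated `/etc/netplan/01-osism.yaml` (referenced later in the "Remove unused configuration files" task) might contain — the interface name and address prefix are assumptions, only the management IP of testbed-manager (192.168.16.5) comes from the logged output:

```yaml
# Illustrative sketch -- the real template rendered by osism.commons.network
# is not shown in this log. Interface name and /20 prefix are assumed.
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: false
      addresses:
        - 192.168.16.5/20   # management address of testbed-manager (from the log)
```

The role then deletes the cloud-init-generated `/etc/netplan/50-cloud-init.yaml` so that only the OSISM-managed file remains, which is exactly what the "changed" results on that cleanup task show.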
2026-03-16 00:36:05.745054 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2026-03-16 00:36:05.745067 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2026-03-16 00:36:05.745079 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-03-16 00:36:05.745092 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-03-16 00:36:05.745105 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-03-16 00:36:05.745118 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-03-16 00:36:05.745131 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-03-16 00:36:05.745144 | orchestrator | 2026-03-16 00:36:05.745175 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2026-03-16 00:36:05.745193 | orchestrator | Monday 16 March 2026 00:35:48 +0000 (0:00:00.929) 0:00:06.938 ********** 2026-03-16 00:36:05.745206 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-16 00:36:05.745220 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-16 00:36:05.745233 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-16 00:36:05.745245 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-16 00:36:05.745257 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-16 00:36:05.745270 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-16 00:36:05.745283 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-16 00:36:05.745295 | orchestrator | 2026-03-16 00:36:05.745307 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-03-16 00:36:05.745319 | orchestrator | Monday 16 March 2026 00:35:51 +0000 (0:00:03.071) 0:00:10.009 ********** 2026-03-16 00:36:05.745329 | orchestrator | changed: [testbed-manager] 2026-03-16 00:36:05.745340 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:36:05.745350 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:36:05.745361 | orchestrator | changed: 
[testbed-node-1] 2026-03-16 00:36:05.745372 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:36:05.745383 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:36:05.745393 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:36:05.745404 | orchestrator | 2026-03-16 00:36:05.745415 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-03-16 00:36:05.745426 | orchestrator | Monday 16 March 2026 00:35:53 +0000 (0:00:01.715) 0:00:11.725 ********** 2026-03-16 00:36:05.745437 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-16 00:36:05.745447 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-16 00:36:05.745490 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-16 00:36:05.745502 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-16 00:36:05.745521 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-16 00:36:05.745532 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-16 00:36:05.745543 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-16 00:36:05.745554 | orchestrator | 2026-03-16 00:36:05.745565 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-03-16 00:36:05.745576 | orchestrator | Monday 16 March 2026 00:35:54 +0000 (0:00:01.582) 0:00:13.307 ********** 2026-03-16 00:36:05.745586 | orchestrator | ok: [testbed-manager] 2026-03-16 00:36:05.745597 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:36:05.745608 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:36:05.745619 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:36:05.745630 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:36:05.745641 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:36:05.745651 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:36:05.745662 | orchestrator | 2026-03-16 00:36:05.745672 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-03-16 00:36:05.745702 | 
orchestrator | Monday 16 March 2026 00:35:55 +0000 (0:00:01.092) 0:00:14.399 ********** 2026-03-16 00:36:05.745714 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:36:05.745725 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:36:05.745736 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:36:05.745747 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:36:05.745758 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:36:05.745768 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:36:05.745779 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:36:05.745789 | orchestrator | 2026-03-16 00:36:05.745800 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-03-16 00:36:05.745811 | orchestrator | Monday 16 March 2026 00:35:56 +0000 (0:00:00.607) 0:00:15.007 ********** 2026-03-16 00:36:05.745822 | orchestrator | ok: [testbed-manager] 2026-03-16 00:36:05.745833 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:36:05.745844 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:36:05.745855 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:36:05.745865 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:36:05.745876 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:36:05.745886 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:36:05.745897 | orchestrator | 2026-03-16 00:36:05.745908 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-03-16 00:36:05.745918 | orchestrator | Monday 16 March 2026 00:35:58 +0000 (0:00:02.154) 0:00:17.162 ********** 2026-03-16 00:36:05.745929 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:36:05.745940 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:36:05.745951 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:36:05.745962 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:36:05.745973 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:36:05.745983 | 
orchestrator | skipping: [testbed-node-5] 2026-03-16 00:36:05.745995 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-03-16 00:36:05.746007 | orchestrator | 2026-03-16 00:36:05.746081 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-03-16 00:36:05.746096 | orchestrator | Monday 16 March 2026 00:35:59 +0000 (0:00:00.785) 0:00:17.948 ********** 2026-03-16 00:36:05.746107 | orchestrator | ok: [testbed-manager] 2026-03-16 00:36:05.746117 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:36:05.746128 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:36:05.746138 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:36:05.746149 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:36:05.746160 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:36:05.746170 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:36:05.746181 | orchestrator | 2026-03-16 00:36:05.746191 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-03-16 00:36:05.746202 | orchestrator | Monday 16 March 2026 00:36:01 +0000 (0:00:01.765) 0:00:19.713 ********** 2026-03-16 00:36:05.746214 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-16 00:36:05.746233 | orchestrator | 2026-03-16 00:36:05.746244 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-03-16 00:36:05.746255 | orchestrator | Monday 16 March 2026 00:36:02 +0000 (0:00:01.296) 0:00:21.010 ********** 2026-03-16 00:36:05.746265 | orchestrator | ok: [testbed-manager] 2026-03-16 00:36:05.746276 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:36:05.746287 | orchestrator 
| ok: [testbed-node-1] 2026-03-16 00:36:05.746298 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:36:05.746314 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:36:05.746325 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:36:05.746335 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:36:05.746346 | orchestrator | 2026-03-16 00:36:05.746357 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-03-16 00:36:05.746368 | orchestrator | Monday 16 March 2026 00:36:03 +0000 (0:00:01.250) 0:00:22.261 ********** 2026-03-16 00:36:05.746378 | orchestrator | ok: [testbed-manager] 2026-03-16 00:36:05.746389 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:36:05.746400 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:36:05.746411 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:36:05.746421 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:36:05.746432 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:36:05.746442 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:36:05.746453 | orchestrator | 2026-03-16 00:36:05.746495 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-03-16 00:36:05.746506 | orchestrator | Monday 16 March 2026 00:36:04 +0000 (0:00:00.650) 0:00:22.911 ********** 2026-03-16 00:36:05.746517 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-03-16 00:36:05.746528 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-03-16 00:36:05.746539 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-03-16 00:36:05.746550 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-03-16 00:36:05.746560 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-16 00:36:05.746571 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-03-16 00:36:05.746582 | 
orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-16 00:36:05.746593 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-03-16 00:36:05.746603 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-16 00:36:05.746614 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-16 00:36:05.746625 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-16 00:36:05.746635 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-03-16 00:36:05.746646 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-16 00:36:05.746657 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-16 00:36:05.746668 | orchestrator | 2026-03-16 00:36:05.746688 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-03-16 00:36:20.614248 | orchestrator | Monday 16 March 2026 00:36:05 +0000 (0:00:01.263) 0:00:24.174 ********** 2026-03-16 00:36:20.614359 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:36:20.614376 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:36:20.614388 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:36:20.614399 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:36:20.614410 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:36:20.614421 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:36:20.614521 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:36:20.614534 | orchestrator | 2026-03-16 00:36:20.614572 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-03-16 00:36:20.614584 | orchestrator | Monday 16 March 2026 00:36:06 +0000 (0:00:00.617) 0:00:24.792 ********** 2026-03-16 00:36:20.614596 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-3, testbed-node-5, testbed-node-2, testbed-node-4 2026-03-16 00:36:20.614610 | orchestrator | 2026-03-16 00:36:20.614621 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-03-16 00:36:20.614632 | orchestrator | Monday 16 March 2026 00:36:10 +0000 (0:00:04.257) 0:00:29.049 ********** 2026-03-16 00:36:20.614644 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-03-16 00:36:20.614669 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-03-16 00:36:20.614682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-03-16 00:36:20.614694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-03-16 00:36:20.614705 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 
42}}) 2026-03-16 00:36:20.614725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-03-16 00:36:20.614736 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-03-16 00:36:20.614748 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-03-16 00:36:20.614759 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-03-16 00:36:20.614770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-03-16 00:36:20.614784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-03-16 00:36:20.614815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-03-16 00:36:20.614838 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-03-16 00:36:20.614851 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-03-16 00:36:20.614863 | orchestrator | 2026-03-16 00:36:20.614877 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-03-16 00:36:20.614890 | orchestrator | Monday 16 March 2026 00:36:15 +0000 (0:00:04.917) 0:00:33.966 ********** 2026-03-16 00:36:20.614903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-03-16 00:36:20.614916 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-03-16 00:36:20.614929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-03-16 00:36:20.614942 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-03-16 00:36:20.614955 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-03-16 00:36:20.614973 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-03-16 00:36:20.614986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-03-16 00:36:20.614999 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-03-16 00:36:20.615012 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-03-16 00:36:20.615025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 
'mtu': 1350, 'vni': 23}}) 2026-03-16 00:36:20.615038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-03-16 00:36:20.615057 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-03-16 00:36:20.615081 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-03-16 00:36:25.533258 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-03-16 00:36:25.533339 | orchestrator | 2026-03-16 00:36:25.533349 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-03-16 00:36:25.533357 | orchestrator | Monday 16 March 2026 00:36:20 +0000 (0:00:05.081) 0:00:39.048 ********** 2026-03-16 00:36:25.533365 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-16 00:36:25.533372 | orchestrator | 2026-03-16 00:36:25.533379 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 
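The netdev/network items logged above follow a full-mesh pattern: every VXLAN endpoint lists all other endpoints as unicast destinations, and each host's `dests` list is simply the set of all endpoints minus its own `local_ip`. A small sketch reproducing that computation from the logged addresses (the helper name `mesh_dests` is illustrative, not part of osism.commons.network):

```python
# Endpoint addresses taken from the logged task output above.
ENDPOINTS = [
    "192.168.16.5",   # testbed-manager
    "192.168.16.10",  # testbed-node-0
    "192.168.16.11",  # testbed-node-1
    "192.168.16.12",  # testbed-node-2
    "192.168.16.13",  # testbed-node-3
    "192.168.16.14",  # testbed-node-4
    "192.168.16.15",  # testbed-node-5
]

def mesh_dests(local_ip: str, endpoints: list[str]) -> list[str]:
    """Return every endpoint except the local one.

    Plain string sort matches the ordering seen in the log, where
    192.168.16.5 sorts after 192.168.16.1x.
    """
    return sorted(ip for ip in endpoints if ip != local_ip)

# Matches the logged item for testbed-node-0 (local_ip 192.168.16.10):
print(mesh_dests("192.168.16.10", ENDPOINTS))
```

Each `dests` list printed in the log for vxlan0 and vxlan1 is consistent with this rule; only the manager and a subset of nodes additionally carry `addresses` on the overlay.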
2026-03-16 00:36:25.533385 | orchestrator | Monday 16 March 2026 00:36:21 +0000 (0:00:00.917) 0:00:39.965 ********** 2026-03-16 00:36:25.533392 | orchestrator | ok: [testbed-manager] 2026-03-16 00:36:25.533399 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:36:25.533405 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:36:25.533411 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:36:25.533417 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:36:25.533450 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:36:25.533461 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:36:25.533473 | orchestrator | 2026-03-16 00:36:25.533484 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-03-16 00:36:25.533494 | orchestrator | Monday 16 March 2026 00:36:22 +0000 (0:00:00.938) 0:00:40.903 ********** 2026-03-16 00:36:25.533505 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-16 00:36:25.533512 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-16 00:36:25.533518 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-16 00:36:25.533525 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-16 00:36:25.533531 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-16 00:36:25.533537 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-16 00:36:25.533543 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-16 00:36:25.533550 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-16 00:36:25.533556 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:36:25.533563 | orchestrator | skipping: [testbed-node-1] => 
(item=/etc/systemd/network/30-vxlan1.network)  2026-03-16 00:36:25.533569 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-16 00:36:25.533588 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-16 00:36:25.533594 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-16 00:36:25.533601 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:36:25.533625 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-16 00:36:25.533631 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-16 00:36:25.533637 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-16 00:36:25.533643 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-16 00:36:25.533659 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:36:25.533666 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-16 00:36:25.533681 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-16 00:36:25.533687 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-16 00:36:25.533693 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-16 00:36:25.533700 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:36:25.533706 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-16 00:36:25.533712 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-16 00:36:25.533718 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-16 00:36:25.533724 | orchestrator | 
skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-16 00:36:25.533730 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:36:25.533737 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:36:25.533743 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-16 00:36:25.533749 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-16 00:36:25.533755 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-16 00:36:25.533761 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-16 00:36:25.533768 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:36:25.533774 | orchestrator | 2026-03-16 00:36:25.533780 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2026-03-16 00:36:25.533799 | orchestrator | Monday 16 March 2026 00:36:24 +0000 (0:00:01.680) 0:00:42.584 ********** 2026-03-16 00:36:25.533806 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:36:25.533812 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:36:25.533819 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:36:25.533827 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:36:25.533833 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:36:25.533840 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:36:25.533847 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:36:25.533854 | orchestrator | 2026-03-16 00:36:25.533861 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2026-03-16 00:36:25.533869 | orchestrator | Monday 16 March 2026 00:36:24 +0000 (0:00:00.536) 0:00:43.121 ********** 2026-03-16 00:36:25.533876 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:36:25.533882 | orchestrator | skipping: [testbed-node-0] 2026-03-16 
00:36:25.533889 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:36:25.533897 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:36:25.533904 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:36:25.533911 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:36:25.533918 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:36:25.533925 | orchestrator | 2026-03-16 00:36:25.533932 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-16 00:36:25.533940 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-16 00:36:25.533949 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-16 00:36:25.533961 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-16 00:36:25.533968 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-16 00:36:25.533975 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-16 00:36:25.533982 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-16 00:36:25.533989 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-16 00:36:25.533996 | orchestrator | 2026-03-16 00:36:25.534003 | orchestrator | 2026-03-16 00:36:25.534010 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-16 00:36:25.534061 | orchestrator | Monday 16 March 2026 00:36:25 +0000 (0:00:00.597) 0:00:43.719 ********** 2026-03-16 00:36:25.534075 | orchestrator | =============================================================================== 2026-03-16 00:36:25.534083 | orchestrator | osism.commons.network : Create systemd networkd network 
files ----------- 5.08s 2026-03-16 00:36:25.534090 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 4.92s 2026-03-16 00:36:25.534097 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.26s 2026-03-16 00:36:25.534104 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.07s 2026-03-16 00:36:25.534111 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.17s 2026-03-16 00:36:25.534118 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.15s 2026-03-16 00:36:25.534125 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.80s 2026-03-16 00:36:25.534131 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.77s 2026-03-16 00:36:25.534138 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.72s 2026-03-16 00:36:25.534146 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.68s 2026-03-16 00:36:25.534153 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.58s 2026-03-16 00:36:25.534160 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.30s 2026-03-16 00:36:25.534167 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.26s 2026-03-16 00:36:25.534174 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.25s 2026-03-16 00:36:25.534180 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.10s 2026-03-16 00:36:25.534186 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.09s 2026-03-16 00:36:25.534192 | orchestrator | osism.commons.network : List existing configuration files 
--------------- 0.94s 2026-03-16 00:36:25.534198 | orchestrator | osism.commons.network : Create required directories --------------------- 0.93s 2026-03-16 00:36:25.534204 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 0.92s 2026-03-16 00:36:25.534210 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.79s 2026-03-16 00:36:25.728989 | orchestrator | + osism apply wireguard 2026-03-16 00:36:37.747328 | orchestrator | 2026-03-16 00:36:37 | INFO  | Task 0ada2923-058f-489a-bc0e-5536233dd873 (wireguard) was prepared for execution. 2026-03-16 00:36:37.747482 | orchestrator | 2026-03-16 00:36:37 | INFO  | It takes a moment until task 0ada2923-058f-489a-bc0e-5536233dd873 (wireguard) has been started and output is visible here. 2026-03-16 00:36:56.598850 | orchestrator | 2026-03-16 00:36:56.598926 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-03-16 00:36:56.598953 | orchestrator | 2026-03-16 00:36:56.598959 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-03-16 00:36:56.598964 | orchestrator | Monday 16 March 2026 00:36:41 +0000 (0:00:00.211) 0:00:00.211 ********** 2026-03-16 00:36:56.598969 | orchestrator | ok: [testbed-manager] 2026-03-16 00:36:56.598975 | orchestrator | 2026-03-16 00:36:56.598980 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-03-16 00:36:56.598984 | orchestrator | Monday 16 March 2026 00:36:43 +0000 (0:00:01.367) 0:00:01.579 ********** 2026-03-16 00:36:56.598999 | orchestrator | changed: [testbed-manager] 2026-03-16 00:36:56.599007 | orchestrator | 2026-03-16 00:36:56.599012 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-03-16 00:36:56.599017 | orchestrator | Monday 16 March 2026 00:36:49 +0000 (0:00:06.358) 0:00:07.937 ********** 2026-03-16 
00:36:56.599021 | orchestrator | changed: [testbed-manager] 2026-03-16 00:36:56.599026 | orchestrator | 2026-03-16 00:36:56.599031 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-03-16 00:36:56.599035 | orchestrator | Monday 16 March 2026 00:36:50 +0000 (0:00:00.548) 0:00:08.485 ********** 2026-03-16 00:36:56.599040 | orchestrator | changed: [testbed-manager] 2026-03-16 00:36:56.599044 | orchestrator | 2026-03-16 00:36:56.599049 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-03-16 00:36:56.599053 | orchestrator | Monday 16 March 2026 00:36:50 +0000 (0:00:00.384) 0:00:08.869 ********** 2026-03-16 00:36:56.599058 | orchestrator | ok: [testbed-manager] 2026-03-16 00:36:56.599062 | orchestrator | 2026-03-16 00:36:56.599067 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-03-16 00:36:56.599071 | orchestrator | Monday 16 March 2026 00:36:51 +0000 (0:00:00.558) 0:00:09.428 ********** 2026-03-16 00:36:56.599076 | orchestrator | ok: [testbed-manager] 2026-03-16 00:36:56.599080 | orchestrator | 2026-03-16 00:36:56.599085 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-03-16 00:36:56.599089 | orchestrator | Monday 16 March 2026 00:36:51 +0000 (0:00:00.396) 0:00:09.825 ********** 2026-03-16 00:36:56.599094 | orchestrator | ok: [testbed-manager] 2026-03-16 00:36:56.599098 | orchestrator | 2026-03-16 00:36:56.599103 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-03-16 00:36:56.599107 | orchestrator | Monday 16 March 2026 00:36:51 +0000 (0:00:00.385) 0:00:10.210 ********** 2026-03-16 00:36:56.599112 | orchestrator | changed: [testbed-manager] 2026-03-16 00:36:56.599116 | orchestrator | 2026-03-16 00:36:56.599121 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 
2026-03-16 00:36:56.599125 | orchestrator | Monday 16 March 2026 00:36:52 +0000 (0:00:01.041) 0:00:11.252 ********** 2026-03-16 00:36:56.599130 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-16 00:36:56.599134 | orchestrator | changed: [testbed-manager] 2026-03-16 00:36:56.599139 | orchestrator | 2026-03-16 00:36:56.599143 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-03-16 00:36:56.599148 | orchestrator | Monday 16 March 2026 00:36:53 +0000 (0:00:00.850) 0:00:12.102 ********** 2026-03-16 00:36:56.599152 | orchestrator | changed: [testbed-manager] 2026-03-16 00:36:56.599157 | orchestrator | 2026-03-16 00:36:56.599162 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-03-16 00:36:56.599167 | orchestrator | Monday 16 March 2026 00:36:55 +0000 (0:00:01.527) 0:00:13.630 ********** 2026-03-16 00:36:56.599171 | orchestrator | changed: [testbed-manager] 2026-03-16 00:36:56.599176 | orchestrator | 2026-03-16 00:36:56.599181 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-16 00:36:56.599186 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-16 00:36:56.599191 | orchestrator | 2026-03-16 00:36:56.599196 | orchestrator | 2026-03-16 00:36:56.599200 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-16 00:36:56.599210 | orchestrator | Monday 16 March 2026 00:36:56 +0000 (0:00:00.908) 0:00:14.538 ********** 2026-03-16 00:36:56.599214 | orchestrator | =============================================================================== 2026-03-16 00:36:56.599219 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.36s 2026-03-16 00:36:56.599223 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.53s 
2026-03-16 00:36:56.599228 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.37s 2026-03-16 00:36:56.599232 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.04s 2026-03-16 00:36:56.599237 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.91s 2026-03-16 00:36:56.599241 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.85s 2026-03-16 00:36:56.599246 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.56s 2026-03-16 00:36:56.599250 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.55s 2026-03-16 00:36:56.599255 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.40s 2026-03-16 00:36:56.599259 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.39s 2026-03-16 00:36:56.599264 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.38s 2026-03-16 00:36:56.919124 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-03-16 00:36:56.950542 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-03-16 00:36:56.950636 | orchestrator | Dload Upload Total Spent Left Speed 2026-03-16 00:36:57.026598 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 185 0 --:--:-- --:--:-- --:--:-- 186 2026-03-16 00:36:57.039340 | orchestrator | + osism apply --environment custom workarounds 2026-03-16 00:36:58.968878 | orchestrator | 2026-03-16 00:36:58 | INFO  | Trying to run play workarounds in environment custom 2026-03-16 00:37:09.061095 | orchestrator | 2026-03-16 00:37:09 | INFO  | Task 72e6e150-b17d-4835-b064-4f84af94cd5e (workarounds) was prepared for execution. 
2026-03-16 00:37:09.061271 | orchestrator | 2026-03-16 00:37:09 | INFO  | It takes a moment until task 72e6e150-b17d-4835-b064-4f84af94cd5e (workarounds) has been started and output is visible here. 2026-03-16 00:37:33.947416 | orchestrator | 2026-03-16 00:37:33.947529 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-16 00:37:33.947546 | orchestrator | 2026-03-16 00:37:33.947558 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2026-03-16 00:37:33.947570 | orchestrator | Monday 16 March 2026 00:37:12 +0000 (0:00:00.095) 0:00:00.095 ********** 2026-03-16 00:37:33.947582 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2026-03-16 00:37:33.947594 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2026-03-16 00:37:33.947605 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2026-03-16 00:37:33.947616 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2026-03-16 00:37:33.947627 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2026-03-16 00:37:33.947637 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2026-03-16 00:37:33.947648 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2026-03-16 00:37:33.947659 | orchestrator | 2026-03-16 00:37:33.947670 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2026-03-16 00:37:33.947681 | orchestrator | 2026-03-16 00:37:33.947692 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-03-16 00:37:33.947703 | orchestrator | Monday 16 March 2026 00:37:13 +0000 (0:00:00.598) 0:00:00.694 ********** 2026-03-16 00:37:33.947714 | orchestrator | ok: [testbed-manager] 2026-03-16 00:37:33.947752 | orchestrator | 2026-03-16 00:37:33.947764 | 
orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2026-03-16 00:37:33.947775 | orchestrator | 2026-03-16 00:37:33.947786 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-03-16 00:37:33.947797 | orchestrator | Monday 16 March 2026 00:37:15 +0000 (0:00:02.076) 0:00:02.771 ********** 2026-03-16 00:37:33.947809 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:37:33.947820 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:37:33.947831 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:37:33.947841 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:37:33.947852 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:37:33.947863 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:37:33.947874 | orchestrator | 2026-03-16 00:37:33.947885 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2026-03-16 00:37:33.947897 | orchestrator | 2026-03-16 00:37:33.947910 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2026-03-16 00:37:33.947936 | orchestrator | Monday 16 March 2026 00:37:17 +0000 (0:00:01.908) 0:00:04.680 ********** 2026-03-16 00:37:33.947950 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-16 00:37:33.947963 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-16 00:37:33.947977 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-16 00:37:33.947989 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-16 00:37:33.948002 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-16 00:37:33.948014 | orchestrator 
| changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-16 00:37:33.948027 | orchestrator | 2026-03-16 00:37:33.948040 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2026-03-16 00:37:33.948053 | orchestrator | Monday 16 March 2026 00:37:19 +0000 (0:00:01.615) 0:00:06.295 ********** 2026-03-16 00:37:33.948066 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:37:33.948078 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:37:33.948090 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:37:33.948103 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:37:33.948116 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:37:33.948128 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:37:33.948140 | orchestrator | 2026-03-16 00:37:33.948153 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2026-03-16 00:37:33.948165 | orchestrator | Monday 16 March 2026 00:37:23 +0000 (0:00:03.942) 0:00:10.238 ********** 2026-03-16 00:37:33.948178 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:37:33.948192 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:37:33.948204 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:37:33.948217 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:37:33.948229 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:37:33.948242 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:37:33.948254 | orchestrator | 2026-03-16 00:37:33.948266 | orchestrator | PLAY [Add a workaround service] ************************************************ 2026-03-16 00:37:33.948279 | orchestrator | 2026-03-16 00:37:33.948292 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2026-03-16 00:37:33.948304 | orchestrator | Monday 16 March 2026 00:37:23 +0000 (0:00:00.663) 0:00:10.901 ********** 2026-03-16 
00:37:33.948315 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:37:33.948345 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:37:33.948366 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:37:33.948384 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:37:33.948403 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:37:33.948421 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:37:33.948451 | orchestrator | changed: [testbed-manager] 2026-03-16 00:37:33.948470 | orchestrator | 2026-03-16 00:37:33.948482 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2026-03-16 00:37:33.948493 | orchestrator | Monday 16 March 2026 00:37:25 +0000 (0:00:01.638) 0:00:12.539 ********** 2026-03-16 00:37:33.948503 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:37:33.948514 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:37:33.948525 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:37:33.948535 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:37:33.948546 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:37:33.948557 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:37:33.948587 | orchestrator | changed: [testbed-manager] 2026-03-16 00:37:33.948598 | orchestrator | 2026-03-16 00:37:33.948609 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2026-03-16 00:37:33.948620 | orchestrator | Monday 16 March 2026 00:37:26 +0000 (0:00:01.616) 0:00:14.156 ********** 2026-03-16 00:37:33.948631 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:37:33.948642 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:37:33.948652 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:37:33.948663 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:37:33.948674 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:37:33.948684 | orchestrator | ok: [testbed-manager] 2026-03-16 00:37:33.948695 | orchestrator | ok: [testbed-node-5] 
2026-03-16 00:37:33.948705 | orchestrator | 2026-03-16 00:37:33.948716 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2026-03-16 00:37:33.948727 | orchestrator | Monday 16 March 2026 00:37:28 +0000 (0:00:01.647) 0:00:15.803 ********** 2026-03-16 00:37:33.948738 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:37:33.948748 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:37:33.948759 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:37:33.948770 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:37:33.948781 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:37:33.948791 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:37:33.948802 | orchestrator | changed: [testbed-manager] 2026-03-16 00:37:33.948813 | orchestrator | 2026-03-16 00:37:33.948823 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2026-03-16 00:37:33.948834 | orchestrator | Monday 16 March 2026 00:37:30 +0000 (0:00:01.834) 0:00:17.638 ********** 2026-03-16 00:37:33.948845 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:37:33.948855 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:37:33.948866 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:37:33.948877 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:37:33.948887 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:37:33.948898 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:37:33.948908 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:37:33.948919 | orchestrator | 2026-03-16 00:37:33.948930 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2026-03-16 00:37:33.948941 | orchestrator | 2026-03-16 00:37:33.948951 | orchestrator | TASK [Install python3-docker] ************************************************** 2026-03-16 00:37:33.948962 | orchestrator | Monday 16 March 2026 00:37:31 +0000 (0:00:00.618) 
0:00:18.256 ********** 2026-03-16 00:37:33.948973 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:37:33.948984 | orchestrator | ok: [testbed-manager] 2026-03-16 00:37:33.948994 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:37:33.949005 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:37:33.949016 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:37:33.949033 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:37:33.949044 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:37:33.949055 | orchestrator | 2026-03-16 00:37:33.949066 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-16 00:37:33.949078 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-16 00:37:33.949090 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-16 00:37:33.949108 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-16 00:37:33.949119 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-16 00:37:33.949130 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-16 00:37:33.949141 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-16 00:37:33.949151 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-16 00:37:33.949162 | orchestrator | 2026-03-16 00:37:33.949173 | orchestrator | 2026-03-16 00:37:33.949183 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-16 00:37:33.949194 | orchestrator | Monday 16 March 2026 00:37:33 +0000 (0:00:02.837) 0:00:21.094 ********** 2026-03-16 00:37:33.949205 | orchestrator | 
=============================================================================== 2026-03-16 00:37:33.949215 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.94s 2026-03-16 00:37:33.949226 | orchestrator | Install python3-docker -------------------------------------------------- 2.84s 2026-03-16 00:37:33.949237 | orchestrator | Apply netplan configuration --------------------------------------------- 2.08s 2026-03-16 00:37:33.949248 | orchestrator | Apply netplan configuration --------------------------------------------- 1.91s 2026-03-16 00:37:33.949259 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.83s 2026-03-16 00:37:33.949270 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.65s 2026-03-16 00:37:33.949280 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.64s 2026-03-16 00:37:33.949291 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.62s 2026-03-16 00:37:33.949301 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.62s 2026-03-16 00:37:33.949312 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.66s 2026-03-16 00:37:33.949343 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.62s 2026-03-16 00:37:33.949361 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.60s 2026-03-16 00:37:34.716599 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2026-03-16 00:37:46.747398 | orchestrator | 2026-03-16 00:37:46 | INFO  | Task d959787d-74ce-455a-8079-98ade7d5b234 (reboot) was prepared for execution. 
2026-03-16 00:37:46.747503 | orchestrator | 2026-03-16 00:37:46 | INFO  | It takes a moment until task d959787d-74ce-455a-8079-98ade7d5b234 (reboot) has been started and output is visible here. 2026-03-16 00:37:57.089727 | orchestrator | 2026-03-16 00:37:57.089836 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-16 00:37:57.089852 | orchestrator | 2026-03-16 00:37:57.089865 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-16 00:37:57.089876 | orchestrator | Monday 16 March 2026 00:37:50 +0000 (0:00:00.203) 0:00:00.203 ********** 2026-03-16 00:37:57.089887 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:37:57.089900 | orchestrator | 2026-03-16 00:37:57.089912 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-16 00:37:57.089923 | orchestrator | Monday 16 March 2026 00:37:51 +0000 (0:00:00.102) 0:00:00.305 ********** 2026-03-16 00:37:57.089934 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:37:57.089945 | orchestrator | 2026-03-16 00:37:57.089956 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-16 00:37:57.089993 | orchestrator | Monday 16 March 2026 00:37:52 +0000 (0:00:00.947) 0:00:01.253 ********** 2026-03-16 00:37:57.090005 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:37:57.090078 | orchestrator | 2026-03-16 00:37:57.090091 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-16 00:37:57.090102 | orchestrator | 2026-03-16 00:37:57.090113 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-16 00:37:57.090124 | orchestrator | Monday 16 March 2026 00:37:52 +0000 (0:00:00.124) 0:00:01.377 ********** 2026-03-16 00:37:57.090135 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:37:57.090145 | 
orchestrator |
2026-03-16 00:37:57.090156 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-16 00:37:57.090167 | orchestrator | Monday 16 March 2026  00:37:52 +0000 (0:00:00.107)       0:00:01.485 **********
2026-03-16 00:37:57.090178 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:37:57.090188 | orchestrator |
2026-03-16 00:37:57.090199 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-16 00:37:57.090224 | orchestrator | Monday 16 March 2026  00:37:52 +0000 (0:00:00.663)       0:00:02.148 **********
2026-03-16 00:37:57.090235 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:37:57.090246 | orchestrator |
2026-03-16 00:37:57.090257 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-16 00:37:57.090270 | orchestrator |
2026-03-16 00:37:57.090283 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-16 00:37:57.090296 | orchestrator | Monday 16 March 2026  00:37:53 +0000 (0:00:00.107)       0:00:02.256 **********
2026-03-16 00:37:57.090332 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:37:57.090345 | orchestrator |
2026-03-16 00:37:57.090358 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-16 00:37:57.090370 | orchestrator | Monday 16 March 2026  00:37:53 +0000 (0:00:00.233)       0:00:02.490 **********
2026-03-16 00:37:57.090383 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:37:57.090396 | orchestrator |
2026-03-16 00:37:57.090408 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-16 00:37:57.090421 | orchestrator | Monday 16 March 2026  00:37:53 +0000 (0:00:00.670)       0:00:03.161 **********
2026-03-16 00:37:57.090433 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:37:57.090445 | orchestrator |
2026-03-16 00:37:57.090458 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-16 00:37:57.090470 | orchestrator |
2026-03-16 00:37:57.090482 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-16 00:37:57.090494 | orchestrator | Monday 16 March 2026  00:37:54 +0000 (0:00:00.123)       0:00:03.284 **********
2026-03-16 00:37:57.090506 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:37:57.090518 | orchestrator |
2026-03-16 00:37:57.090531 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-16 00:37:57.090544 | orchestrator | Monday 16 March 2026  00:37:54 +0000 (0:00:00.129)       0:00:03.413 **********
2026-03-16 00:37:57.090556 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:37:57.090568 | orchestrator |
2026-03-16 00:37:57.090580 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-16 00:37:57.090593 | orchestrator | Monday 16 March 2026  00:37:54 +0000 (0:00:00.685)       0:00:04.099 **********
2026-03-16 00:37:57.090605 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:37:57.090617 | orchestrator |
2026-03-16 00:37:57.090628 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-16 00:37:57.090638 | orchestrator |
2026-03-16 00:37:57.090649 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-16 00:37:57.090660 | orchestrator | Monday 16 March 2026  00:37:54 +0000 (0:00:00.110)       0:00:04.209 **********
2026-03-16 00:37:57.090670 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:37:57.090681 | orchestrator |
2026-03-16 00:37:57.090692 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-16 00:37:57.090712 | orchestrator | Monday 16 March 2026  00:37:55 +0000 (0:00:00.102)       0:00:04.312 **********
2026-03-16 00:37:57.090723 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:37:57.090734 | orchestrator |
2026-03-16 00:37:57.090744 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-16 00:37:57.090755 | orchestrator | Monday 16 March 2026  00:37:55 +0000 (0:00:00.636)       0:00:04.949 **********
2026-03-16 00:37:57.090765 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:37:57.090777 | orchestrator |
2026-03-16 00:37:57.090787 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-16 00:37:57.090798 | orchestrator |
2026-03-16 00:37:57.090808 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-16 00:37:57.090819 | orchestrator | Monday 16 March 2026  00:37:55 +0000 (0:00:00.118)       0:00:05.067 **********
2026-03-16 00:37:57.090830 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:37:57.090840 | orchestrator |
2026-03-16 00:37:57.090851 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-16 00:37:57.090862 | orchestrator | Monday 16 March 2026  00:37:55 +0000 (0:00:00.100)       0:00:05.168 **********
2026-03-16 00:37:57.090872 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:37:57.090883 | orchestrator |
2026-03-16 00:37:57.090893 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-16 00:37:57.090904 | orchestrator | Monday 16 March 2026  00:37:56 +0000 (0:00:00.705)       0:00:05.873 **********
2026-03-16 00:37:57.090933 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:37:57.090945 | orchestrator |
2026-03-16 00:37:57.090956 | orchestrator | PLAY RECAP *********************************************************************
2026-03-16 00:37:57.090968 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-16 00:37:57.090984 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-16 00:37:57.091004 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-16 00:37:57.091022 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-16 00:37:57.091039 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-16 00:37:57.091056 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-16 00:37:57.091073 | orchestrator |
2026-03-16 00:37:57.091091 | orchestrator |
2026-03-16 00:37:57.091107 | orchestrator | TASKS RECAP ********************************************************************
2026-03-16 00:37:57.091124 | orchestrator | Monday 16 March 2026  00:37:56 +0000 (0:00:00.040)       0:00:05.914 **********
2026-03-16 00:37:57.091153 | orchestrator | ===============================================================================
2026-03-16 00:37:57.091173 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.31s
2026-03-16 00:37:57.091191 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.78s
2026-03-16 00:37:57.091210 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.63s
2026-03-16 00:37:57.453744 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2026-03-16 00:38:09.558766 | orchestrator | 2026-03-16 00:38:09 | INFO  | Task d1bfa150-fdec-4f92-b9c9-bb1cb87c7e6d (wait-for-connection) was prepared for execution.
2026-03-16 00:38:09.558863 | orchestrator | 2026-03-16 00:38:09 | INFO  | It takes a moment until task d1bfa150-fdec-4f92-b9c9-bb1cb87c7e6d (wait-for-connection) has been started and output is visible here.
2026-03-16 00:38:25.402847 | orchestrator |
2026-03-16 00:38:25.402981 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2026-03-16 00:38:25.403008 | orchestrator |
2026-03-16 00:38:25.403028 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2026-03-16 00:38:25.403047 | orchestrator | Monday 16 March 2026  00:38:13 +0000 (0:00:00.175)       0:00:00.175 **********
2026-03-16 00:38:25.403064 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:38:25.403083 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:38:25.403101 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:38:25.403120 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:38:25.403138 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:38:25.403157 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:38:25.403175 | orchestrator |
2026-03-16 00:38:25.403195 | orchestrator | PLAY RECAP *********************************************************************
2026-03-16 00:38:25.403213 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-16 00:38:25.403234 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-16 00:38:25.403265 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-16 00:38:25.403332 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-16 00:38:25.403353 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-16 00:38:25.403372 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-16 00:38:25.403392 | orchestrator |
2026-03-16 00:38:25.403406 | orchestrator |
2026-03-16 00:38:25.403419 | orchestrator | TASKS RECAP ********************************************************************
2026-03-16 00:38:25.403432 | orchestrator | Monday 16 March 2026  00:38:24 +0000 (0:00:11.453)       0:00:11.628 **********
2026-03-16 00:38:25.403446 | orchestrator | ===============================================================================
2026-03-16 00:38:25.403461 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.45s
2026-03-16 00:38:25.845922 | orchestrator | + osism apply hddtemp
2026-03-16 00:38:38.109058 | orchestrator | 2026-03-16 00:38:38 | INFO  | Task a7a13117-9726-464d-887a-7c08b4e1d29f (hddtemp) was prepared for execution.
2026-03-16 00:38:38.109149 | orchestrator | 2026-03-16 00:38:38 | INFO  | It takes a moment until task a7a13117-9726-464d-887a-7c08b4e1d29f (hddtemp) has been started and output is visible here.
2026-03-16 00:39:05.964394 | orchestrator |
2026-03-16 00:39:05.964521 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2026-03-16 00:39:05.964538 | orchestrator |
2026-03-16 00:39:05.964551 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2026-03-16 00:39:05.965295 | orchestrator | Monday 16 March 2026  00:38:42 +0000 (0:00:00.231)       0:00:00.231 **********
2026-03-16 00:39:05.965324 | orchestrator | ok: [testbed-manager]
2026-03-16 00:39:05.965337 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:39:05.965348 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:39:05.965359 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:39:05.965369 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:39:05.965380 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:39:05.965391 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:39:05.965402 | orchestrator |
2026-03-16 00:39:05.965413 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2026-03-16 00:39:05.965424 | orchestrator | Monday 16 March 2026  00:38:42 +0000 (0:00:00.609)       0:00:00.841 **********
2026-03-16 00:39:05.965437 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-16 00:39:05.965476 | orchestrator |
2026-03-16 00:39:05.965488 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2026-03-16 00:39:05.965498 | orchestrator | Monday 16 March 2026  00:38:43 +0000 (0:00:01.135)       0:00:01.976 **********
2026-03-16 00:39:05.965509 | orchestrator | ok: [testbed-manager]
2026-03-16 00:39:05.965520 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:39:05.965530 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:39:05.965540 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:39:05.965552 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:39:05.965562 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:39:05.965573 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:39:05.965583 | orchestrator |
2026-03-16 00:39:05.965594 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2026-03-16 00:39:05.965619 | orchestrator | Monday 16 March 2026  00:38:45 +0000 (0:00:01.973)       0:00:03.950 **********
2026-03-16 00:39:05.965631 | orchestrator | changed: [testbed-manager]
2026-03-16 00:39:05.965642 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:39:05.965653 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:39:05.965664 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:39:05.965674 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:39:05.965685 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:39:05.965695 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:39:05.965706 | orchestrator |
2026-03-16 00:39:05.965716 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2026-03-16 00:39:05.965727 | orchestrator | Monday 16 March 2026  00:38:46 +0000 (0:00:01.127)       0:00:05.078 **********
2026-03-16 00:39:05.965738 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:39:05.965748 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:39:05.965759 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:39:05.965769 | orchestrator | ok: [testbed-manager]
2026-03-16 00:39:05.965780 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:39:05.965790 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:39:05.965801 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:39:05.965811 | orchestrator |
2026-03-16 00:39:05.965822 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2026-03-16 00:39:05.965833 | orchestrator | Monday 16 March 2026  00:38:48 +0000 (0:00:01.903)       0:00:06.982 **********
2026-03-16 00:39:05.965843 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:39:05.965854 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:39:05.965865 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:39:05.965875 | orchestrator | changed: [testbed-manager]
2026-03-16 00:39:05.965886 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:39:05.965896 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:39:05.965907 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:39:05.965917 | orchestrator |
2026-03-16 00:39:05.965928 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2026-03-16 00:39:05.965939 | orchestrator | Monday 16 March 2026  00:38:49 +0000 (0:00:00.777)       0:00:07.760 **********
2026-03-16 00:39:05.965949 | orchestrator | changed: [testbed-manager]
2026-03-16 00:39:05.965960 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:39:05.965970 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:39:05.965981 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:39:05.965991 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:39:05.966002 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:39:05.966013 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:39:05.966082 | orchestrator |
2026-03-16 00:39:05.966093 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2026-03-16 00:39:05.966104 | orchestrator | Monday 16 March 2026  00:39:02 +0000 (0:00:13.226)       0:00:20.986 **********
2026-03-16 00:39:05.966115 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-16 00:39:05.966135 | orchestrator |
2026-03-16 00:39:05.966146 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2026-03-16 00:39:05.966188 | orchestrator | Monday 16 March 2026  00:39:03 +0000 (0:00:01.111)       0:00:22.097 **********
2026-03-16 00:39:05.966199 | orchestrator | changed: [testbed-manager]
2026-03-16 00:39:05.966210 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:39:05.966303 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:39:05.966315 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:39:05.966326 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:39:05.966337 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:39:05.966348 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:39:05.966359 | orchestrator |
2026-03-16 00:39:05.966370 | orchestrator | PLAY RECAP *********************************************************************
2026-03-16 00:39:05.966381 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-16 00:39:05.966416 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-16 00:39:05.966430 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-16 00:39:05.966450 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-16 00:39:05.966467 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-16 00:39:05.966483 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-16 00:39:05.966501 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-16 00:39:05.966518 | orchestrator |
2026-03-16 00:39:05.966537 | orchestrator |
2026-03-16 00:39:05.966554 | orchestrator | TASKS RECAP ********************************************************************
2026-03-16 00:39:05.966573 | orchestrator | Monday 16 March 2026  00:39:05 +0000 (0:00:01.708)       0:00:23.806 **********
2026-03-16 00:39:05.966591 | orchestrator | ===============================================================================
2026-03-16 00:39:05.966608 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.23s
2026-03-16 00:39:05.966620 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.97s
2026-03-16 00:39:05.966630 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.90s
2026-03-16 00:39:05.966650 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.71s
2026-03-16 00:39:05.966661 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.14s
2026-03-16 00:39:05.966672 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.13s
2026-03-16 00:39:05.966682 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.11s
2026-03-16 00:39:05.966693 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.78s
2026-03-16 00:39:05.966703 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.61s
2026-03-16 00:39:06.178168 | orchestrator | ++ semver 9.5.0 7.1.1
2026-03-16 00:39:06.216198 | orchestrator | + [[ 1 -ge 0 ]]
2026-03-16 00:39:06.216358 | orchestrator | + sudo systemctl restart manager.service
2026-03-16 00:39:19.586252 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-03-16 00:39:19.586324 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-03-16 00:39:19.586331 | orchestrator | + local max_attempts=60
2026-03-16 00:39:19.586336 | orchestrator | + local name=ceph-ansible
2026-03-16 00:39:19.586340 | orchestrator | + local attempt_num=1
2026-03-16 00:39:19.586345 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-16 00:39:19.623266 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-16 00:39:19.623324 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-16 00:39:19.623329 | orchestrator | + sleep 5
2026-03-16 00:39:24.628084 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-16 00:39:24.644781 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-16 00:39:24.644820 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-16 00:39:24.644828 | orchestrator | + sleep 5
2026-03-16 00:39:29.648824 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-16 00:39:29.683681 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-16 00:39:29.683799 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-16 00:39:29.683825 | orchestrator | + sleep 5
2026-03-16 00:39:34.688068 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-16 00:39:34.720622 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-16 00:39:34.720730 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-16 00:39:34.720746 | orchestrator | + sleep 5
2026-03-16 00:39:39.725010 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-16 00:39:39.766348 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-16 00:39:39.766452 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-16 00:39:39.766469 | orchestrator | + sleep 5
2026-03-16 00:39:44.770234 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-16 00:39:44.810217 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-16 00:39:44.810284 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-16 00:39:44.810290 | orchestrator | + sleep 5
2026-03-16 00:39:49.814927 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-16 00:39:49.854654 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-16 00:39:49.854858 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-16 00:39:49.854888 | orchestrator | + sleep 5
2026-03-16 00:39:54.861787 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-16 00:39:54.892199 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-16 00:39:54.892289 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-16 00:39:54.892297 | orchestrator | + sleep 5
2026-03-16 00:39:59.892648 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-16 00:40:00.066804 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-16 00:40:00.066913 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-16 00:40:00.066929 | orchestrator | + sleep 5
2026-03-16 00:40:05.070588 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-16 00:40:05.108408 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-16 00:40:05.108511 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-16 00:40:05.108526 | orchestrator | + sleep 5
2026-03-16 00:40:10.111798 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-16 00:40:10.149598 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-16 00:40:10.149687 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-16 00:40:10.149700 | orchestrator | + sleep 5
2026-03-16 00:40:15.154832 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-16 00:40:15.193703 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-16 00:40:15.193796 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-16 00:40:15.193808 | orchestrator | + sleep 5
2026-03-16 00:40:20.198379 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-16 00:40:20.232279 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-16 00:40:20.232370 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-16 00:40:20.232381 | orchestrator | + sleep 5
2026-03-16 00:40:25.238284 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-16 00:40:25.278418 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-16 00:40:25.278525 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-03-16 00:40:25.278543 | orchestrator | + local max_attempts=60
2026-03-16 00:40:25.278556 | orchestrator | + local name=kolla-ansible
2026-03-16 00:40:25.278588 | orchestrator | + local attempt_num=1
2026-03-16 00:40:25.279237 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-03-16 00:40:25.323011 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-16 00:40:25.323093 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-03-16 00:40:25.323180 | orchestrator | + local max_attempts=60
2026-03-16 00:40:25.323198 | orchestrator | + local name=osism-ansible
2026-03-16 00:40:25.323207 | orchestrator | + local attempt_num=1
2026-03-16 00:40:25.323215 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-03-16 00:40:25.357880 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-16 00:40:25.358145 | orchestrator | + [[ true == \t\r\u\e ]]
2026-03-16 00:40:25.358164 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-03-16 00:40:25.546439 | orchestrator | ARA in ceph-ansible already disabled.
2026-03-16 00:40:25.711413 | orchestrator | ARA in kolla-ansible already disabled.
2026-03-16 00:40:25.872928 | orchestrator | ARA in osism-ansible already disabled.
2026-03-16 00:40:26.028559 | orchestrator | ARA in osism-kubernetes already disabled.
2026-03-16 00:40:26.029646 | orchestrator | + osism apply gather-facts
2026-03-16 00:40:38.270815 | orchestrator | 2026-03-16 00:40:38 | INFO  | Task ba238057-099a-41c7-ab0a-d5125dfdeafe (gather-facts) was prepared for execution.
2026-03-16 00:40:38.270923 | orchestrator | 2026-03-16 00:40:38 | INFO  | It takes a moment until task ba238057-099a-41c7-ab0a-d5125dfdeafe (gather-facts) has been started and output is visible here.
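The `set -x` trace above shows `wait_for_container_healthy` polling `docker inspect` for `.State.Health.Status` every five seconds until the container reports `healthy`, bailing out after `max_attempts` polls. The function body itself never appears in the log, so the following is a reconstruction consistent with the trace; the `DOCKER` and `WAIT_INTERVAL` overrides are illustrative additions, not part of the original script:

```shell
#!/usr/bin/env bash
# Reconstruction (from the xtrace output) of wait_for_container_healthy:
# poll the container's Docker health status until it is "healthy",
# giving up after max_attempts polls spaced WAIT_INTERVAL seconds apart.
# DOCKER and WAIT_INTERVAL defaults match the log; both are overridable
# here purely for illustration/testing.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$("${DOCKER:-/usr/bin/docker}" inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container $name did not become healthy in time" >&2
            return 1
        fi
        sleep "${WAIT_INTERVAL:-5}"
    done
}
```

In this run the ceph-ansible container moved from `unhealthy` through `starting` to `healthy` in roughly a minute, well inside the 60-attempt (about five-minute) budget, and the subsequent kolla-ansible and osism-ansible checks passed on the first poll.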
2026-03-16 00:40:51.197629 | orchestrator | 2026-03-16 00:40:51.197774 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-16 00:40:51.197792 | orchestrator | 2026-03-16 00:40:51.197804 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-16 00:40:51.197816 | orchestrator | Monday 16 March 2026 00:40:42 +0000 (0:00:00.161) 0:00:00.161 ********** 2026-03-16 00:40:51.197828 | orchestrator | ok: [testbed-manager] 2026-03-16 00:40:51.197841 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:40:51.197852 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:40:51.197863 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:40:51.197874 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:40:51.197884 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:40:51.197895 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:40:51.197906 | orchestrator | 2026-03-16 00:40:51.197917 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-16 00:40:51.197928 | orchestrator | 2026-03-16 00:40:51.197939 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-16 00:40:51.197950 | orchestrator | Monday 16 March 2026 00:40:50 +0000 (0:00:08.313) 0:00:08.475 ********** 2026-03-16 00:40:51.197961 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:40:51.197973 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:40:51.197984 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:40:51.197995 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:40:51.198006 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:40:51.198069 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:40:51.198084 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:40:51.198094 | orchestrator | 2026-03-16 00:40:51.198139 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-16 00:40:51.198152 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-16 00:40:51.198165 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-16 00:40:51.198177 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-16 00:40:51.198189 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-16 00:40:51.198201 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-16 00:40:51.198214 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-16 00:40:51.198258 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-16 00:40:51.198271 | orchestrator | 2026-03-16 00:40:51.198283 | orchestrator | 2026-03-16 00:40:51.198295 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-16 00:40:51.198308 | orchestrator | Monday 16 March 2026 00:40:50 +0000 (0:00:00.459) 0:00:08.935 ********** 2026-03-16 00:40:51.198321 | orchestrator | =============================================================================== 2026-03-16 00:40:51.198333 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.31s 2026-03-16 00:40:51.198346 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.46s 2026-03-16 00:40:51.400583 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-03-16 00:40:51.413357 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-03-16 
00:40:51.423686 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-03-16 00:40:51.432283 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-03-16 00:40:51.454627 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-03-16 00:40:51.468721 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-03-16 00:40:51.480382 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-03-16 00:40:51.492279 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-03-16 00:40:51.504544 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-03-16 00:40:51.517278 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-03-16 00:40:51.537758 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-03-16 00:40:51.548243 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-03-16 00:40:51.559964 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-03-16 00:40:51.575905 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-03-16 00:40:51.587768 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-03-16 00:40:51.597994 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2026-03-16 00:40:51.606789 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2026-03-16 00:40:51.615161 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2026-03-16 00:40:51.625099 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2026-03-16 00:40:51.643177 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2026-03-16 00:40:51.663772 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2026-03-16 00:40:51.682235 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2026-03-16 00:40:51.695855 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2026-03-16 00:40:51.710295 | orchestrator | + [[ false == \t\r\u\e ]]
2026-03-16 00:40:51.839003 | orchestrator | ok: Runtime: 0:23:59.702246
2026-03-16 00:40:51.944003 |
2026-03-16 00:40:51.944160 | TASK [Deploy services]
2026-03-16 00:40:52.478186 | orchestrator | skipping: Conditional result was False
2026-03-16 00:40:52.495138 |
2026-03-16 00:40:52.495319 | TASK [Deploy in a nutshell]
2026-03-16 00:40:53.275622 | orchestrator | + set -e
2026-03-16 00:40:53.275799 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-16 00:40:53.275820 | orchestrator | ++ export INTERACTIVE=false
2026-03-16 00:40:53.275837 | orchestrator | ++ INTERACTIVE=false
2026-03-16 00:40:53.275848 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-16 00:40:53.275859 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-16 00:40:53.275871 | orchestrator | + source /opt/manager-vars.sh
2026-03-16 00:40:53.275912 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-16 00:40:53.275934 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-16 00:40:53.275946 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-16 00:40:53.275960 | orchestrator | ++ CEPH_VERSION=reef
2026-03-16 00:40:53.275969 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-16 00:40:53.275983 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-16 00:40:53.276004 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-16 00:40:53.276021 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-16 00:40:53.276030 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-16 00:40:53.276042 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-16 00:40:53.276051 | orchestrator | ++ export ARA=false
2026-03-16 00:40:53.276061 | orchestrator | ++ ARA=false
2026-03-16 00:40:53.276070 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-16 00:40:53.276080 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-16 00:40:53.276089 | orchestrator | ++ export TEMPEST=true
2026-03-16 00:40:53.276098 | orchestrator | ++ TEMPEST=true
2026-03-16 00:40:53.276132 | orchestrator | ++ export IS_ZUUL=true
2026-03-16 00:40:53.276142 | orchestrator | ++ IS_ZUUL=true
2026-03-16 00:40:53.276151 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.83
2026-03-16 00:40:53.276160 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.83
2026-03-16 00:40:53.276170 | orchestrator |
2026-03-16 00:40:53.276180 | orchestrator | # PULL IMAGES
2026-03-16 00:40:53.276189 | orchestrator |
2026-03-16 00:40:53.276198 | orchestrator | ++ export EXTERNAL_API=false
2026-03-16 00:40:53.276207 | orchestrator | ++ EXTERNAL_API=false
2026-03-16 00:40:53.276216 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-16 00:40:53.276225 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-16 00:40:53.276235 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-16 00:40:53.276243 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-16 00:40:53.276252 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-16 00:40:53.276267 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-16 00:40:53.276276 | orchestrator | + echo
2026-03-16 00:40:53.276285 | orchestrator | + echo '# PULL IMAGES'
2026-03-16 00:40:53.276292 | orchestrator | + echo
2026-03-16 00:40:53.277504 | orchestrator | ++ semver 9.5.0 7.0.0
2026-03-16 00:40:53.336310 | orchestrator | + [[ 1 -ge 0 ]]
2026-03-16 00:40:53.336411 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-03-16 00:40:55.314269 | orchestrator | 2026-03-16 00:40:55 | INFO  | Trying to run play pull-images in environment custom
2026-03-16 00:41:05.405232 | orchestrator | 2026-03-16 00:41:05 | INFO  | Task ec2f6e11-1b7e-4431-9d6d-07c351e072c4 (pull-images) was prepared for execution.
2026-03-16 00:41:05.405506 | orchestrator | 2026-03-16 00:41:05 | INFO  | Task ec2f6e11-1b7e-4431-9d6d-07c351e072c4 is running in background. No more output. Check ARA for logs.
2026-03-16 00:41:07.585083 | orchestrator | 2026-03-16 00:41:07 | INFO  | Trying to run play wipe-partitions in environment custom
2026-03-16 00:41:17.805981 | orchestrator | 2026-03-16 00:41:17 | INFO  | Task 971ac616-bd59-41dd-8a22-0b8451895b57 (wipe-partitions) was prepared for execution.
2026-03-16 00:41:17.806144 | orchestrator | 2026-03-16 00:41:17 | INFO  | It takes a moment until task 971ac616-bd59-41dd-8a22-0b8451895b57 (wipe-partitions) has been started and output is visible here.
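The version gate in the trace above (`semver 9.5.0 7.0.0` printing `1`, then `[[ 1 -ge 0 ]]`) decides whether `osism apply ... pull-images` runs. A minimal sketch of such a comparison using `sort -V` follows; this is an illustrative reimplementation, not the actual `semver` helper from /opt/configuration/scripts/include.sh:

```shell
#!/usr/bin/env bash
# Illustrative semver-style comparison: prints 1 if the first version is
# newer, -1 if older, 0 if equal. Relies on GNU coreutils' sort -V, which
# orders version strings numerically per component.
semver() {
    local a="$1" b="$2"
    if [[ "$a" == "$b" ]]; then
        echo 0
    elif [[ "$(printf '%s\n' "$a" "$b" | sort -V | head -n1)" == "$b" ]]; then
        echo 1    # b sorts first, so a > b
    else
        echo -1   # a sorts first, so a < b
    fi
}

semver 9.5.0 7.0.0   # prints 1: MANAGER_VERSION 9.5.0 is newer than 7.0.0
```

With this shape, `[[ $(semver "$MANAGER_VERSION" 7.0.0) -ge 0 ]]` is true for any manager at or above 7.0.0, matching the gate seen in the trace.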
2026-03-16 00:41:30.155275 | orchestrator |
2026-03-16 00:41:30.155405 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-03-16 00:41:30.155431 | orchestrator |
2026-03-16 00:41:30.155449 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-03-16 00:41:30.155475 | orchestrator | Monday 16 March 2026 00:41:21 +0000 (0:00:00.131) 0:00:00.131 **********
2026-03-16 00:41:30.155487 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:41:30.155497 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:41:30.155508 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:41:30.155518 | orchestrator |
2026-03-16 00:41:30.155528 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-03-16 00:41:30.155564 | orchestrator | Monday 16 March 2026 00:41:22 +0000 (0:00:00.540) 0:00:00.672 **********
2026-03-16 00:41:30.155581 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:41:30.155597 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:41:30.155614 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:41:30.155633 | orchestrator |
2026-03-16 00:41:30.155650 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-03-16 00:41:30.155665 | orchestrator | Monday 16 March 2026 00:41:22 +0000 (0:00:00.302) 0:00:00.975 **********
2026-03-16 00:41:30.155681 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:41:30.155699 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:41:30.155715 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:41:30.155732 | orchestrator |
2026-03-16 00:41:30.155749 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-03-16 00:41:30.155765 | orchestrator | Monday 16 March 2026 00:41:23 +0000 (0:00:00.557) 0:00:01.532 **********
2026-03-16 00:41:30.155781 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:41:30.155798 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:41:30.155815 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:41:30.155833 | orchestrator |
2026-03-16 00:41:30.155845 | orchestrator | TASK [Check device availability] ***********************************************
2026-03-16 00:41:30.155857 | orchestrator | Monday 16 March 2026 00:41:23 +0000 (0:00:00.239) 0:00:01.771 **********
2026-03-16 00:41:30.155868 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-03-16 00:41:30.155884 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-03-16 00:41:30.155895 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-03-16 00:41:30.155906 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-03-16 00:41:30.155917 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-03-16 00:41:30.155928 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-03-16 00:41:30.155939 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-03-16 00:41:30.155955 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-03-16 00:41:30.155972 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-03-16 00:41:30.155988 | orchestrator |
2026-03-16 00:41:30.156005 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-03-16 00:41:30.156022 | orchestrator | Monday 16 March 2026 00:41:24 +0000 (0:00:01.204) 0:00:02.976 **********
2026-03-16 00:41:30.156040 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-03-16 00:41:30.156058 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-03-16 00:41:30.156074 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-03-16 00:41:30.156146 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-03-16 00:41:30.156157 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-03-16 00:41:30.156167 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-03-16 00:41:30.156176 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-03-16 00:41:30.156186 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-03-16 00:41:30.156196 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-03-16 00:41:30.156205 | orchestrator |
2026-03-16 00:41:30.156215 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-03-16 00:41:30.156225 | orchestrator | Monday 16 March 2026 00:41:26 +0000 (0:00:01.667) 0:00:04.644 **********
2026-03-16 00:41:30.156234 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-03-16 00:41:30.156244 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-03-16 00:41:30.156254 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-03-16 00:41:30.156264 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-03-16 00:41:30.156273 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-03-16 00:41:30.156283 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-03-16 00:41:30.156296 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-03-16 00:41:30.156322 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-03-16 00:41:30.156354 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-03-16 00:41:30.156372 | orchestrator |
2026-03-16 00:41:30.156388 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-03-16 00:41:30.156405 | orchestrator | Monday 16 March 2026 00:41:28 +0000 (0:00:00.623) 0:00:06.804 **********
2026-03-16 00:41:30.156422 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:41:30.156440 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:41:30.156457 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:41:30.156475 | orchestrator |
2026-03-16 00:41:30.156493 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-03-16 00:41:30.156511 | orchestrator | Monday 16 March 2026 00:41:29 +0000 (0:00:00.620) 0:00:07.427 **********
2026-03-16 00:41:30.156528 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:41:30.156546 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:41:30.156563 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:41:30.156581 | orchestrator |
2026-03-16 00:41:30.156600 | orchestrator | PLAY RECAP *********************************************************************
2026-03-16 00:41:30.156618 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-16 00:41:30.156635 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-16 00:41:30.156667 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-16 00:41:30.156677 | orchestrator |
2026-03-16 00:41:30.156687 | orchestrator |
2026-03-16 00:41:30.156697 | orchestrator | TASKS RECAP ********************************************************************
2026-03-16 00:41:30.156706 | orchestrator | Monday 16 March 2026 00:41:29 +0000 (0:00:00.620) 0:00:08.048 **********
2026-03-16 00:41:30.156716 | orchestrator | ===============================================================================
2026-03-16 00:41:30.156726 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.16s
2026-03-16 00:41:30.156735 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.67s
2026-03-16 00:41:30.156745 | orchestrator | Check device availability ----------------------------------------------- 1.20s
2026-03-16 00:41:30.156755 | orchestrator | Reload udev rules ------------------------------------------------------- 0.62s
2026-03-16 00:41:30.156764 | orchestrator | Request device events from the kernel ----------------------------------- 0.62s
2026-03-16 00:41:30.156774 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.56s
2026-03-16 00:41:30.156783 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.54s
2026-03-16 00:41:30.156794 | orchestrator | Remove all rook related logical devices --------------------------------- 0.30s
2026-03-16 00:41:30.156811 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.24s
2026-03-16 00:41:42.201002 | orchestrator | 2026-03-16 00:41:42 | INFO  | Task e3c563c7-1ebb-4937-9897-e69b3e43a8a3 (facts) was prepared for execution.
2026-03-16 00:41:42.201135 | orchestrator | 2026-03-16 00:41:42 | INFO  | It takes a moment until task e3c563c7-1ebb-4937-9897-e69b3e43a8a3 (facts) has been started and output is visible here.
2026-03-16 00:41:53.805012 | orchestrator |
2026-03-16 00:41:53.805178 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-03-16 00:41:53.805193 | orchestrator |
2026-03-16 00:41:53.805204 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-16 00:41:53.805224 | orchestrator | Monday 16 March 2026 00:41:46 +0000 (0:00:00.236) 0:00:00.236 **********
2026-03-16 00:41:53.805233 | orchestrator | ok: [testbed-manager]
2026-03-16 00:41:53.805244 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:41:53.805253 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:41:53.805262 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:41:53.805294 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:41:53.805304 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:41:53.805313 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:41:53.805321 | orchestrator |
2026-03-16 00:41:53.805330 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-16 00:41:53.805339 | orchestrator | Monday 16 March 2026 00:41:47 +0000 (0:00:00.993) 0:00:01.230 **********
2026-03-16 00:41:53.805348 | orchestrator | skipping: [testbed-manager]
2026-03-16 00:41:53.805357 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:41:53.805369 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:41:53.805378 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:41:53.805386 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:41:53.805395 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:41:53.805404 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:41:53.805413 | orchestrator |
2026-03-16 00:41:53.805422 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-16 00:41:53.805431 | orchestrator |
2026-03-16 00:41:53.805440 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-16 00:41:53.805448 | orchestrator | Monday 16 March 2026 00:41:48 +0000 (0:00:00.931) 0:00:02.162 **********
2026-03-16 00:41:53.805457 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:41:53.805466 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:41:53.805475 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:41:53.805484 | orchestrator | ok: [testbed-manager]
2026-03-16 00:41:53.805493 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:41:53.805502 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:41:53.805511 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:41:53.805519 | orchestrator |
2026-03-16 00:41:53.805528 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-03-16 00:41:53.805537 | orchestrator |
2026-03-16 00:41:53.805546 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-03-16 00:41:53.805555 | orchestrator | Monday 16 March 2026 00:41:52 +0000 (0:00:04.764) 0:00:06.927 **********
2026-03-16 00:41:53.805564 | orchestrator | skipping: [testbed-manager]
2026-03-16 00:41:53.805572 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:41:53.805581 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:41:53.805590 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:41:53.805613 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:41:53.805622 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:41:53.805631 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:41:53.805640 | orchestrator |
2026-03-16 00:41:53.805649 | orchestrator | PLAY RECAP *********************************************************************
2026-03-16 00:41:53.805658 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-16 00:41:53.805668 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-16 00:41:53.805677 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-16 00:41:53.805686 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-16 00:41:53.805695 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-16 00:41:53.805704 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-16 00:41:53.805713 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-16 00:41:53.805722 | orchestrator |
2026-03-16 00:41:53.805731 | orchestrator |
2026-03-16 00:41:53.805740 | orchestrator | TASKS RECAP ********************************************************************
2026-03-16 00:41:53.805757 | orchestrator | Monday 16 March 2026 00:41:53 +0000 (0:00:00.499) 0:00:07.426 **********
2026-03-16 00:41:53.805766 | orchestrator | ===============================================================================
2026-03-16 00:41:53.805775 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.76s
2026-03-16 00:41:53.805784 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.99s
2026-03-16 00:41:53.805793 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 0.93s
2026-03-16 00:41:53.805802 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s
2026-03-16 00:41:56.307409 | orchestrator | 2026-03-16 00:41:56 | INFO  | Task fe05183a-a4aa-4da6-926f-d82a5cf09c81 (ceph-configure-lvm-volumes) was prepared for execution.
2026-03-16 00:41:56.307511 | orchestrator | 2026-03-16 00:41:56 | INFO  | It takes a moment until task fe05183a-a4aa-4da6-926f-d82a5cf09c81 (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-03-16 00:42:07.068645 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-16 00:42:07.068736 | orchestrator | 2.16.14
2026-03-16 00:42:07.068747 | orchestrator |
2026-03-16 00:42:07.068755 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-16 00:42:07.068763 | orchestrator |
2026-03-16 00:42:07.068771 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-16 00:42:07.068778 | orchestrator | Monday 16 March 2026 00:42:00 +0000 (0:00:00.317) 0:00:00.317 **********
2026-03-16 00:42:07.068788 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-16 00:42:07.068794 | orchestrator |
2026-03-16 00:42:07.068800 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-16 00:42:07.068806 | orchestrator | Monday 16 March 2026 00:42:00 +0000 (0:00:00.219) 0:00:00.536 **********
2026-03-16 00:42:07.068812 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:42:07.068819 | orchestrator |
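The wipe-partitions play earlier in the log runs, per device, a `wipefs`, a 32M zero-overwrite, and then a udev reload/trigger so the kernel re-reads the now-empty devices. That sequence can be sketched as a dry-run shell loop; the `run` wrapper and `DRY_RUN` guard are hypothetical additions here, since the real commands destroy data:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the per-device wipe sequence from the play above.
# Set DRY_RUN=0 only for devices that are genuinely safe to destroy.
set -euo pipefail

DRY_RUN="${DRY_RUN:-1}"

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

for dev in /dev/sdb /dev/sdc /dev/sdd; do
    run wipefs --all "$dev"                        # erase filesystem/RAID/LVM signatures
    run dd if=/dev/zero of="$dev" bs=1M count=32   # overwrite first 32M with zeros
done
run udevadm control --reload-rules                 # reload udev rules
run udevadm trigger                                # request device events from the kernel
```

Zeroing the first 32M on top of `wipefs` also clears structures that live past the signature offsets (partition tables, LVM metadata copies), which is why the play does both.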
2026-03-16 00:42:07.068826 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:42:07.068833 | orchestrator | Monday 16 March 2026 00:42:01 +0000 (0:00:00.208) 0:00:00.745 **********
2026-03-16 00:42:07.068839 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-16 00:42:07.068847 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-16 00:42:07.068854 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-16 00:42:07.068860 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-16 00:42:07.068866 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-16 00:42:07.068872 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-16 00:42:07.068878 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-16 00:42:07.068884 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-16 00:42:07.068891 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-16 00:42:07.068897 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-16 00:42:07.068904 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-16 00:42:07.068911 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-16 00:42:07.068925 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-16 00:42:07.068931 | orchestrator |
2026-03-16 00:42:07.068938 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:42:07.068945 | orchestrator | Monday 16 March 2026 00:42:01 +0000 (0:00:00.408) 0:00:01.153 **********
2026-03-16 00:42:07.068970 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:42:07.068976 | orchestrator |
2026-03-16 00:42:07.068982 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:42:07.068988 | orchestrator | Monday 16 March 2026 00:42:01 +0000 (0:00:00.173) 0:00:01.327 **********
2026-03-16 00:42:07.068994 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:42:07.069001 | orchestrator |
2026-03-16 00:42:07.069008 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:42:07.069014 | orchestrator | Monday 16 March 2026 00:42:01 +0000 (0:00:00.172) 0:00:01.500 **********
2026-03-16 00:42:07.069021 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:42:07.069028 | orchestrator |
2026-03-16 00:42:07.069060 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:42:07.069068 | orchestrator | Monday 16 March 2026 00:42:02 +0000 (0:00:00.170) 0:00:01.670 **********
2026-03-16 00:42:07.069077 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:42:07.069083 | orchestrator |
2026-03-16 00:42:07.069089 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:42:07.069094 | orchestrator | Monday 16 March 2026 00:42:02 +0000 (0:00:00.177) 0:00:01.848 **********
2026-03-16 00:42:07.069100 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:42:07.069106 | orchestrator |
2026-03-16 00:42:07.069112 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:42:07.069118 | orchestrator | Monday 16 March 2026 00:42:02 +0000 (0:00:00.182) 0:00:02.030 **********
2026-03-16 00:42:07.069124 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:42:07.069129 | orchestrator |
2026-03-16 00:42:07.069135 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:42:07.069140 | orchestrator | Monday 16 March 2026 00:42:02 +0000 (0:00:00.172) 0:00:02.203 **********
2026-03-16 00:42:07.069146 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:42:07.069152 | orchestrator |
2026-03-16 00:42:07.069158 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:42:07.069164 | orchestrator | Monday 16 March 2026 00:42:02 +0000 (0:00:00.182) 0:00:02.385 **********
2026-03-16 00:42:07.069171 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:42:07.069177 | orchestrator |
2026-03-16 00:42:07.069184 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:42:07.069190 | orchestrator | Monday 16 March 2026 00:42:02 +0000 (0:00:00.197) 0:00:02.583 **********
2026-03-16 00:42:07.069197 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa)
2026-03-16 00:42:07.069205 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa)
2026-03-16 00:42:07.069211 | orchestrator |
2026-03-16 00:42:07.069217 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:42:07.069239 | orchestrator | Monday 16 March 2026 00:42:03 +0000 (0:00:00.356) 0:00:02.939 **********
2026-03-16 00:42:07.069247 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ea250d8e-8a1a-4b7c-87ac-b5f969c8dfc9)
2026-03-16 00:42:07.069254 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ea250d8e-8a1a-4b7c-87ac-b5f969c8dfc9)
2026-03-16 00:42:07.069261 | orchestrator |
2026-03-16 00:42:07.069268 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:42:07.069275 | orchestrator | Monday 16 March 2026 00:42:03 +0000 (0:00:00.487) 0:00:03.427 **********
2026-03-16 00:42:07.069281 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_638de7de-7e30-41bf-b0e2-bce66f40688c)
2026-03-16 00:42:07.069288 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_638de7de-7e30-41bf-b0e2-bce66f40688c)
2026-03-16 00:42:07.069295 | orchestrator |
2026-03-16 00:42:07.069302 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:42:07.069310 | orchestrator | Monday 16 March 2026 00:42:04 +0000 (0:00:00.519) 0:00:03.946 **********
2026-03-16 00:42:07.069325 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8261b325-336c-474c-bfd4-8f783607e19f)
2026-03-16 00:42:07.069331 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8261b325-336c-474c-bfd4-8f783607e19f)
2026-03-16 00:42:07.069337 | orchestrator |
2026-03-16 00:42:07.069344 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:42:07.069350 | orchestrator | Monday 16 March 2026 00:42:04 +0000 (0:00:00.639) 0:00:04.586 **********
2026-03-16 00:42:07.069358 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-16 00:42:07.069364 | orchestrator |
2026-03-16 00:42:07.069371 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-16 00:42:07.069378 | orchestrator | Monday 16 March 2026 00:42:05 +0000 (0:00:00.303) 0:00:04.889 **********
2026-03-16 00:42:07.069390 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-03-16 00:42:07.069397 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-03-16 00:42:07.069404 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-03-16 00:42:07.069410 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-03-16 00:42:07.069416 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-03-16 00:42:07.069423 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-03-16 00:42:07.069429 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-03-16 00:42:07.069435 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-03-16 00:42:07.069442 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-03-16 00:42:07.069448 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-03-16 00:42:07.069455 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-03-16 00:42:07.069461 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-03-16 00:42:07.069466 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-03-16 00:42:07.069472 | orchestrator |
2026-03-16 00:42:07.069478 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-16 00:42:07.069483 | orchestrator | Monday 16 March 2026 00:42:05 +0000 (0:00:00.379) 0:00:05.269 **********
2026-03-16 00:42:07.069489 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:42:07.069496 | orchestrator |
2026-03-16 00:42:07.069502 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-16 00:42:07.069509 | orchestrator | Monday 16 March 2026 00:42:05 +0000 (0:00:00.221) 0:00:05.490 **********
2026-03-16 00:42:07.069516 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:42:07.069523 | orchestrator |
2026-03-16 00:42:07.069529 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-16 00:42:07.069536 | orchestrator | Monday 16 March 2026 00:42:06 +0000 (0:00:00.206) 0:00:05.697 **********
2026-03-16 00:42:07.069543 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:42:07.069549 | orchestrator |
2026-03-16 00:42:07.069555 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-16 00:42:07.069560 | orchestrator | Monday 16 March 2026 00:42:06 +0000 (0:00:00.197) 0:00:05.894 **********
2026-03-16 00:42:07.069566 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:42:07.069572 | orchestrator |
2026-03-16 00:42:07.069579 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-16 00:42:07.069586 | orchestrator | Monday 16 March 2026 00:42:06 +0000 (0:00:00.197) 0:00:06.091 **********
2026-03-16 00:42:07.069592 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:42:07.069605 | orchestrator |
2026-03-16 00:42:07.069612 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-16 00:42:07.069619 | orchestrator | Monday 16 March 2026 00:42:06 +0000 (0:00:00.181) 0:00:06.272 **********
2026-03-16 00:42:07.069626 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:42:07.069633 | orchestrator |
2026-03-16 00:42:07.069640 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-16 00:42:07.069646 | orchestrator | Monday 16 March 2026 00:42:06 +0000 (0:00:00.192) 0:00:06.465 **********
2026-03-16 00:42:07.069652 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:42:07.069658 | orchestrator |
2026-03-16 00:42:07.069671 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-16 00:42:14.776654 | orchestrator | Monday 16 March 2026 00:42:07 +0000 (0:00:00.189) 0:00:06.655 **********
2026-03-16 00:42:14.776754 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:42:14.776768 | orchestrator |
2026-03-16 00:42:14.776779 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-16 00:42:14.776788 | orchestrator | Monday 16 March 2026 00:42:07 +0000 (0:00:00.200) 0:00:06.856 **********
2026-03-16 00:42:14.776798 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-03-16 00:42:14.776807 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-03-16 00:42:14.776817 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-03-16 00:42:14.776828 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-03-16 00:42:14.776847 | orchestrator |
2026-03-16 00:42:14.776869 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-16 00:42:14.776883 | orchestrator | Monday 16 March 2026 00:42:08 +0000 (0:00:01.077) 0:00:07.933 **********
2026-03-16 00:42:14.776897 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:42:14.776918 | orchestrator |
2026-03-16 00:42:14.776934 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-16 00:42:14.776946 | orchestrator | Monday 16 March 2026 00:42:08 +0000 (0:00:00.194) 0:00:08.128 **********
2026-03-16 00:42:14.776960 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:42:14.776974 | orchestrator |
2026-03-16 00:42:14.776987 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-16 00:42:14.777000 | orchestrator | Monday 16 March 2026 00:42:08 +0000 (0:00:00.186) 0:00:08.314 **********
2026-03-16 00:42:14.777013 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:42:14.777087 | orchestrator |
2026-03-16 00:42:14.777104 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-16 00:42:14.777117 | orchestrator | Monday 16 March 2026 00:42:08 +0000 (0:00:00.202) 0:00:08.517 **********
2026-03-16 00:42:14.777130 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:42:14.777143 | orchestrator |
2026-03-16 00:42:14.777155 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-16 00:42:14.777168 | orchestrator | Monday 16 March 2026 00:42:09 +0000 (0:00:00.193) 0:00:08.711 **********
2026-03-16 00:42:14.777181 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-03-16 00:42:14.777195 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-03-16 00:42:14.777209 | orchestrator |
2026-03-16 00:42:14.777223 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-16 00:42:14.777237 | orchestrator | Monday 16 March 2026 00:42:09 +0000 (0:00:00.178) 0:00:08.889 **********
2026-03-16 00:42:14.777252 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:42:14.777266 | orchestrator |
2026-03-16 00:42:14.777281 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-16 00:42:14.777320 | orchestrator | Monday 16 March 2026 00:42:09 +0000 (0:00:00.139) 0:00:09.029 **********
2026-03-16 00:42:14.777337 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:42:14.777351 | orchestrator |
2026-03-16 00:42:14.777365 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-16 00:42:14.777378 | orchestrator | Monday 16 March 2026 00:42:09 +0000 (0:00:00.148) 0:00:09.177 **********
2026-03-16 00:42:14.777420 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:42:14.777437 | orchestrator |
2026-03-16 00:42:14.777451 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-16 00:42:14.777465 | orchestrator | Monday 16 March 2026 00:42:09 +0000 (0:00:00.133) 0:00:09.311 **********
2026-03-16 00:42:14.777479 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:42:14.777494 | orchestrator |
2026-03-16 00:42:14.777509 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-16 00:42:14.777523 | orchestrator | Monday 16 March 2026 00:42:09 +0000 (0:00:00.140) 0:00:09.452 **********
2026-03-16 00:42:14.777566 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '71e0430a-6bf1-53ec-905e-7c884e89f784'}})
2026-03-16 00:42:14.777584 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '40b418b1-0bd6-568c-82b5-8ddc4abd3365'}})
2026-03-16 00:42:14.777598 | orchestrator |
2026-03-16 00:42:14.777612 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-03-16 00:42:14.777627 | orchestrator | Monday 16 March 2026 00:42:10 +0000 (0:00:00.175) 0:00:09.628 **********
2026-03-16 00:42:14.777642 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '71e0430a-6bf1-53ec-905e-7c884e89f784'}})
2026-03-16 00:42:14.777666 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '40b418b1-0bd6-568c-82b5-8ddc4abd3365'}})
2026-03-16 00:42:14.777681 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:42:14.777696 | orchestrator |
2026-03-16 00:42:14.777708 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-03-16 00:42:14.777717 | orchestrator | Monday 16 March 2026 00:42:10 +0000 (0:00:00.153) 0:00:09.781 **********
2026-03-16 00:42:14.777726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '71e0430a-6bf1-53ec-905e-7c884e89f784'}})
2026-03-16 00:42:14.777735 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '40b418b1-0bd6-568c-82b5-8ddc4abd3365'}})
2026-03-16 00:42:14.777819 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:42:14.777835 | orchestrator |
2026-03-16 00:42:14.777854 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-03-16 00:42:14.777873 | orchestrator | Monday 16 March 2026 00:42:10 +0000 (0:00:00.154) 0:00:10.154 **********
2026-03-16 00:42:14.777888 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '71e0430a-6bf1-53ec-905e-7c884e89f784'}})
2026-03-16 00:42:14.777928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '40b418b1-0bd6-568c-82b5-8ddc4abd3365'}})
2026-03-16 00:42:14.777944 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:42:14.777959 | orchestrator |
2026-03-16 00:42:14.777974 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-03-16 00:42:14.777991 | orchestrator | Monday 16 March 2026 00:42:10 +0000 (0:00:00.155) 0:00:10.309 **********
2026-03-16 00:42:14.778007 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:42:14.778216 | orchestrator |
2026-03-16 00:42:14.778939 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-16 00:42:14.778988 | orchestrator | Monday 16 March 2026 00:42:10 +0000 (0:00:00.155) 0:00:10.465 **********
2026-03-16 00:42:14.778994 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:42:14.778999 | orchestrator |
2026-03-16 00:42:14.779013 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-16 00:42:14.779017 | orchestrator | Monday 16 March 2026 00:42:11 +0000 (0:00:00.155) 0:00:10.621 **********
2026-03-16 00:42:14.779021 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:42:14.779047 | orchestrator |
2026-03-16 00:42:14.779052 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-16 00:42:14.779056 | orchestrator | Monday 16 March 2026 00:42:11 +0000 (0:00:00.128) 0:00:10.750 ********** 2026-03-16 00:42:14.779069 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:42:14.779073 | orchestrator | 2026-03-16 00:42:14.779077 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-16 00:42:14.779081 | orchestrator | Monday 16 March 2026 00:42:11 +0000 (0:00:00.152) 0:00:10.902 ********** 2026-03-16 00:42:14.779085 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:42:14.779089 | orchestrator | 2026-03-16 00:42:14.779093 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-16 00:42:14.779097 | orchestrator | Monday 16 March 2026 00:42:11 +0000 (0:00:00.135) 0:00:11.038 ********** 2026-03-16 00:42:14.779101 | orchestrator | ok: [testbed-node-3] => { 2026-03-16 00:42:14.779105 | orchestrator |  "ceph_osd_devices": { 2026-03-16 00:42:14.779109 | orchestrator |  "sdb": { 2026-03-16 00:42:14.779113 | orchestrator |  "osd_lvm_uuid": "71e0430a-6bf1-53ec-905e-7c884e89f784" 2026-03-16 00:42:14.779117 | orchestrator |  }, 2026-03-16 00:42:14.779121 | orchestrator |  "sdc": { 2026-03-16 00:42:14.779124 | orchestrator |  "osd_lvm_uuid": "40b418b1-0bd6-568c-82b5-8ddc4abd3365" 2026-03-16 00:42:14.779128 | orchestrator |  } 2026-03-16 00:42:14.779132 | orchestrator |  } 2026-03-16 00:42:14.779136 | orchestrator | } 2026-03-16 00:42:14.779140 | orchestrator | 2026-03-16 00:42:14.779144 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-16 00:42:14.779148 | orchestrator | Monday 16 March 2026 00:42:11 +0000 (0:00:00.157) 0:00:11.195 ********** 2026-03-16 00:42:14.779152 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:42:14.779156 | orchestrator | 
2026-03-16 00:42:14.779160 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-16 00:42:14.779163 | orchestrator | Monday 16 March 2026 00:42:11 +0000 (0:00:00.141) 0:00:11.337 ********** 2026-03-16 00:42:14.779167 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:42:14.779171 | orchestrator | 2026-03-16 00:42:14.779175 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-16 00:42:14.779179 | orchestrator | Monday 16 March 2026 00:42:11 +0000 (0:00:00.143) 0:00:11.480 ********** 2026-03-16 00:42:14.779182 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:42:14.779186 | orchestrator | 2026-03-16 00:42:14.779190 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-16 00:42:14.779194 | orchestrator | Monday 16 March 2026 00:42:12 +0000 (0:00:00.146) 0:00:11.627 ********** 2026-03-16 00:42:14.779197 | orchestrator | changed: [testbed-node-3] => { 2026-03-16 00:42:14.779201 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-16 00:42:14.779205 | orchestrator |  "ceph_osd_devices": { 2026-03-16 00:42:14.779209 | orchestrator |  "sdb": { 2026-03-16 00:42:14.779213 | orchestrator |  "osd_lvm_uuid": "71e0430a-6bf1-53ec-905e-7c884e89f784" 2026-03-16 00:42:14.779217 | orchestrator |  }, 2026-03-16 00:42:14.779221 | orchestrator |  "sdc": { 2026-03-16 00:42:14.779224 | orchestrator |  "osd_lvm_uuid": "40b418b1-0bd6-568c-82b5-8ddc4abd3365" 2026-03-16 00:42:14.779228 | orchestrator |  } 2026-03-16 00:42:14.779232 | orchestrator |  }, 2026-03-16 00:42:14.779236 | orchestrator |  "lvm_volumes": [ 2026-03-16 00:42:14.779240 | orchestrator |  { 2026-03-16 00:42:14.779244 | orchestrator |  "data": "osd-block-71e0430a-6bf1-53ec-905e-7c884e89f784", 2026-03-16 00:42:14.779248 | orchestrator |  "data_vg": "ceph-71e0430a-6bf1-53ec-905e-7c884e89f784" 2026-03-16 00:42:14.779252 | orchestrator |  }, 
2026-03-16 00:42:14.779255 | orchestrator |  { 2026-03-16 00:42:14.779259 | orchestrator |  "data": "osd-block-40b418b1-0bd6-568c-82b5-8ddc4abd3365", 2026-03-16 00:42:14.779263 | orchestrator |  "data_vg": "ceph-40b418b1-0bd6-568c-82b5-8ddc4abd3365" 2026-03-16 00:42:14.779267 | orchestrator |  } 2026-03-16 00:42:14.779271 | orchestrator |  ] 2026-03-16 00:42:14.779274 | orchestrator |  } 2026-03-16 00:42:14.779278 | orchestrator | } 2026-03-16 00:42:14.779285 | orchestrator | 2026-03-16 00:42:14.779289 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-16 00:42:14.779293 | orchestrator | Monday 16 March 2026 00:42:12 +0000 (0:00:00.457) 0:00:12.085 ********** 2026-03-16 00:42:14.779297 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-16 00:42:14.779300 | orchestrator | 2026-03-16 00:42:14.779310 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-16 00:42:14.779314 | orchestrator | 2026-03-16 00:42:14.779318 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-16 00:42:14.779322 | orchestrator | Monday 16 March 2026 00:42:14 +0000 (0:00:01.790) 0:00:13.875 ********** 2026-03-16 00:42:14.779326 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-16 00:42:14.779330 | orchestrator | 2026-03-16 00:42:14.779333 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-16 00:42:14.779337 | orchestrator | Monday 16 March 2026 00:42:14 +0000 (0:00:00.259) 0:00:14.135 ********** 2026-03-16 00:42:14.779344 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:42:14.779351 | orchestrator | 2026-03-16 00:42:14.779369 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-16 00:42:23.664561 | orchestrator | Monday 16 March 2026 00:42:14 +0000 (0:00:00.229) 
0:00:14.364 ********** 2026-03-16 00:42:23.664671 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-16 00:42:23.664688 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-16 00:42:23.664700 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-16 00:42:23.664711 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-16 00:42:23.664722 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-16 00:42:23.664733 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-16 00:42:23.664744 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-16 00:42:23.664755 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-16 00:42:23.664766 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-16 00:42:23.664777 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-16 00:42:23.664788 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-16 00:42:23.664799 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-16 00:42:23.664814 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-16 00:42:23.664826 | orchestrator | 2026-03-16 00:42:23.664838 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-16 00:42:23.664849 | orchestrator | Monday 16 March 2026 00:42:15 +0000 (0:00:00.388) 0:00:14.752 ********** 2026-03-16 00:42:23.664860 | orchestrator | skipping: 
[testbed-node-4] 2026-03-16 00:42:23.664873 | orchestrator | 2026-03-16 00:42:23.664884 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-16 00:42:23.664895 | orchestrator | Monday 16 March 2026 00:42:15 +0000 (0:00:00.199) 0:00:14.952 ********** 2026-03-16 00:42:23.664906 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:42:23.664917 | orchestrator | 2026-03-16 00:42:23.664929 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-16 00:42:23.664940 | orchestrator | Monday 16 March 2026 00:42:15 +0000 (0:00:00.188) 0:00:15.141 ********** 2026-03-16 00:42:23.664951 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:42:23.664962 | orchestrator | 2026-03-16 00:42:23.664973 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-16 00:42:23.664984 | orchestrator | Monday 16 March 2026 00:42:15 +0000 (0:00:00.189) 0:00:15.330 ********** 2026-03-16 00:42:23.665074 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:42:23.665088 | orchestrator | 2026-03-16 00:42:23.665099 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-16 00:42:23.665110 | orchestrator | Monday 16 March 2026 00:42:15 +0000 (0:00:00.184) 0:00:15.514 ********** 2026-03-16 00:42:23.665120 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:42:23.665131 | orchestrator | 2026-03-16 00:42:23.665142 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-16 00:42:23.665153 | orchestrator | Monday 16 March 2026 00:42:16 +0000 (0:00:00.711) 0:00:16.225 ********** 2026-03-16 00:42:23.665164 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:42:23.665174 | orchestrator | 2026-03-16 00:42:23.665185 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-16 00:42:23.665196 | 
orchestrator | Monday 16 March 2026 00:42:16 +0000 (0:00:00.235) 0:00:16.460 ********** 2026-03-16 00:42:23.665207 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:42:23.665218 | orchestrator | 2026-03-16 00:42:23.665229 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-16 00:42:23.665253 | orchestrator | Monday 16 March 2026 00:42:17 +0000 (0:00:00.214) 0:00:16.675 ********** 2026-03-16 00:42:23.665276 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:42:23.665287 | orchestrator | 2026-03-16 00:42:23.665314 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-16 00:42:23.665326 | orchestrator | Monday 16 March 2026 00:42:17 +0000 (0:00:00.211) 0:00:16.886 ********** 2026-03-16 00:42:23.665336 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5) 2026-03-16 00:42:23.665349 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5) 2026-03-16 00:42:23.665360 | orchestrator | 2026-03-16 00:42:23.665371 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-16 00:42:23.665382 | orchestrator | Monday 16 March 2026 00:42:17 +0000 (0:00:00.424) 0:00:17.310 ********** 2026-03-16 00:42:23.665393 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_dd732262-e9ae-4e48-8009-641fb05b3358) 2026-03-16 00:42:23.665404 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_dd732262-e9ae-4e48-8009-641fb05b3358) 2026-03-16 00:42:23.665415 | orchestrator | 2026-03-16 00:42:23.665426 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-16 00:42:23.665437 | orchestrator | Monday 16 March 2026 00:42:18 +0000 (0:00:00.553) 0:00:17.863 ********** 2026-03-16 00:42:23.665448 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_1db695b4-2be8-41cf-b2f3-0a666ad94649) 2026-03-16 00:42:23.665459 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1db695b4-2be8-41cf-b2f3-0a666ad94649) 2026-03-16 00:42:23.665470 | orchestrator | 2026-03-16 00:42:23.665481 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-16 00:42:23.665510 | orchestrator | Monday 16 March 2026 00:42:18 +0000 (0:00:00.485) 0:00:18.349 ********** 2026-03-16 00:42:23.665522 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e5bc35b8-8936-4f39-b3b2-4c8e21a1af22) 2026-03-16 00:42:23.665533 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e5bc35b8-8936-4f39-b3b2-4c8e21a1af22) 2026-03-16 00:42:23.665544 | orchestrator | 2026-03-16 00:42:23.665555 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-16 00:42:23.665566 | orchestrator | Monday 16 March 2026 00:42:19 +0000 (0:00:00.443) 0:00:18.793 ********** 2026-03-16 00:42:23.665577 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-16 00:42:23.665588 | orchestrator | 2026-03-16 00:42:23.665599 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:42:23.665610 | orchestrator | Monday 16 March 2026 00:42:19 +0000 (0:00:00.340) 0:00:19.133 ********** 2026-03-16 00:42:23.665621 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-03-16 00:42:23.665642 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-16 00:42:23.665653 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-16 00:42:23.665664 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-16 00:42:23.665674 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-16 00:42:23.665685 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-16 00:42:23.665696 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-16 00:42:23.665707 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-16 00:42:23.665717 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-16 00:42:23.665728 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-16 00:42:23.665739 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-16 00:42:23.665750 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-16 00:42:23.665760 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-16 00:42:23.665771 | orchestrator | 2026-03-16 00:42:23.665782 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:42:23.665793 | orchestrator | Monday 16 March 2026 00:42:19 +0000 (0:00:00.375) 0:00:19.508 ********** 2026-03-16 00:42:23.665804 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:42:23.665815 | orchestrator | 2026-03-16 00:42:23.665826 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:42:23.665837 | orchestrator | Monday 16 March 2026 00:42:20 +0000 (0:00:00.892) 0:00:20.401 ********** 2026-03-16 00:42:23.665848 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:42:23.665859 | orchestrator | 2026-03-16 00:42:23.665870 | orchestrator | TASK [Add known partitions to the list of available block 
devices] ************* 2026-03-16 00:42:23.665881 | orchestrator | Monday 16 March 2026 00:42:21 +0000 (0:00:00.260) 0:00:20.662 ********** 2026-03-16 00:42:23.665892 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:42:23.665902 | orchestrator | 2026-03-16 00:42:23.665913 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:42:23.665924 | orchestrator | Monday 16 March 2026 00:42:21 +0000 (0:00:00.247) 0:00:20.909 ********** 2026-03-16 00:42:23.665941 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:42:23.665952 | orchestrator | 2026-03-16 00:42:23.665963 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:42:23.665974 | orchestrator | Monday 16 March 2026 00:42:21 +0000 (0:00:00.212) 0:00:21.122 ********** 2026-03-16 00:42:23.665985 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:42:23.665996 | orchestrator | 2026-03-16 00:42:23.666007 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:42:23.666095 | orchestrator | Monday 16 March 2026 00:42:21 +0000 (0:00:00.230) 0:00:21.352 ********** 2026-03-16 00:42:23.666108 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:42:23.666119 | orchestrator | 2026-03-16 00:42:23.666130 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:42:23.666141 | orchestrator | Monday 16 March 2026 00:42:22 +0000 (0:00:00.240) 0:00:21.593 ********** 2026-03-16 00:42:23.666151 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:42:23.666162 | orchestrator | 2026-03-16 00:42:23.666173 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:42:23.666184 | orchestrator | Monday 16 March 2026 00:42:22 +0000 (0:00:00.232) 0:00:21.825 ********** 2026-03-16 00:42:23.666194 | orchestrator | skipping: [testbed-node-4] 
2026-03-16 00:42:23.666213 | orchestrator | 2026-03-16 00:42:23.666224 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:42:23.666235 | orchestrator | Monday 16 March 2026 00:42:22 +0000 (0:00:00.232) 0:00:22.057 ********** 2026-03-16 00:42:23.666246 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-16 00:42:23.666257 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-16 00:42:23.666269 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-16 00:42:23.666279 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-16 00:42:23.666290 | orchestrator | 2026-03-16 00:42:23.666301 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:42:23.666312 | orchestrator | Monday 16 March 2026 00:42:23 +0000 (0:00:00.959) 0:00:23.017 ********** 2026-03-16 00:42:23.666323 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:42:30.615095 | orchestrator | 2026-03-16 00:42:30.615205 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:42:30.615223 | orchestrator | Monday 16 March 2026 00:42:23 +0000 (0:00:00.231) 0:00:23.249 ********** 2026-03-16 00:42:30.615235 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:42:30.615247 | orchestrator | 2026-03-16 00:42:30.615258 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:42:30.615270 | orchestrator | Monday 16 March 2026 00:42:23 +0000 (0:00:00.228) 0:00:23.477 ********** 2026-03-16 00:42:30.615281 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:42:30.615292 | orchestrator | 2026-03-16 00:42:30.615303 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:42:30.615314 | orchestrator | Monday 16 March 2026 00:42:24 +0000 (0:00:00.212) 0:00:23.690 ********** 2026-03-16 00:42:30.615325 | 
orchestrator | skipping: [testbed-node-4] 2026-03-16 00:42:30.615336 | orchestrator | 2026-03-16 00:42:30.615347 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-16 00:42:30.615365 | orchestrator | Monday 16 March 2026 00:42:24 +0000 (0:00:00.809) 0:00:24.500 ********** 2026-03-16 00:42:30.615389 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-03-16 00:42:30.615416 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-03-16 00:42:30.615434 | orchestrator | 2026-03-16 00:42:30.615451 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-16 00:42:30.615468 | orchestrator | Monday 16 March 2026 00:42:25 +0000 (0:00:00.164) 0:00:24.665 ********** 2026-03-16 00:42:30.615486 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:42:30.615501 | orchestrator | 2026-03-16 00:42:30.615516 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-16 00:42:30.615533 | orchestrator | Monday 16 March 2026 00:42:25 +0000 (0:00:00.147) 0:00:24.812 ********** 2026-03-16 00:42:30.615549 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:42:30.615567 | orchestrator | 2026-03-16 00:42:30.615600 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-16 00:42:30.615623 | orchestrator | Monday 16 March 2026 00:42:25 +0000 (0:00:00.142) 0:00:24.955 ********** 2026-03-16 00:42:30.615644 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:42:30.615665 | orchestrator | 2026-03-16 00:42:30.615685 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-16 00:42:30.615706 | orchestrator | Monday 16 March 2026 00:42:25 +0000 (0:00:00.135) 0:00:25.091 ********** 2026-03-16 00:42:30.615726 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:42:30.615747 | 
orchestrator | 2026-03-16 00:42:30.615768 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-16 00:42:30.615789 | orchestrator | Monday 16 March 2026 00:42:25 +0000 (0:00:00.134) 0:00:25.225 ********** 2026-03-16 00:42:30.615807 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ded6401a-969b-5c16-b1be-1b69fe43ded8'}}) 2026-03-16 00:42:30.615819 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '01ad088d-533b-5bd8-92eb-284afc0ad32d'}}) 2026-03-16 00:42:30.615869 | orchestrator | 2026-03-16 00:42:30.615882 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-16 00:42:30.615894 | orchestrator | Monday 16 March 2026 00:42:25 +0000 (0:00:00.178) 0:00:25.404 ********** 2026-03-16 00:42:30.615906 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ded6401a-969b-5c16-b1be-1b69fe43ded8'}})  2026-03-16 00:42:30.615918 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '01ad088d-533b-5bd8-92eb-284afc0ad32d'}})  2026-03-16 00:42:30.615929 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:42:30.615940 | orchestrator | 2026-03-16 00:42:30.615951 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-16 00:42:30.615962 | orchestrator | Monday 16 March 2026 00:42:25 +0000 (0:00:00.149) 0:00:25.553 ********** 2026-03-16 00:42:30.615973 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ded6401a-969b-5c16-b1be-1b69fe43ded8'}})  2026-03-16 00:42:30.616099 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '01ad088d-533b-5bd8-92eb-284afc0ad32d'}})  2026-03-16 00:42:30.616116 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:42:30.616127 | orchestrator | 2026-03-16 
00:42:30.616138 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-16 00:42:30.616149 | orchestrator | Monday 16 March 2026 00:42:26 +0000 (0:00:00.162) 0:00:25.716 ********** 2026-03-16 00:42:30.616159 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ded6401a-969b-5c16-b1be-1b69fe43ded8'}})  2026-03-16 00:42:30.616171 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '01ad088d-533b-5bd8-92eb-284afc0ad32d'}})  2026-03-16 00:42:30.616182 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:42:30.616193 | orchestrator | 2026-03-16 00:42:30.616204 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-16 00:42:30.616215 | orchestrator | Monday 16 March 2026 00:42:26 +0000 (0:00:00.157) 0:00:25.874 ********** 2026-03-16 00:42:30.616225 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:42:30.616236 | orchestrator | 2026-03-16 00:42:30.616246 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-16 00:42:30.616257 | orchestrator | Monday 16 March 2026 00:42:26 +0000 (0:00:00.145) 0:00:26.020 ********** 2026-03-16 00:42:30.616268 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:42:30.616278 | orchestrator | 2026-03-16 00:42:30.616289 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-16 00:42:30.616300 | orchestrator | Monday 16 March 2026 00:42:26 +0000 (0:00:00.145) 0:00:26.166 ********** 2026-03-16 00:42:30.616332 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:42:30.616343 | orchestrator | 2026-03-16 00:42:30.616354 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-16 00:42:30.616365 | orchestrator | Monday 16 March 2026 00:42:27 +0000 (0:00:00.449) 0:00:26.615 ********** 2026-03-16 
00:42:30.616376 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:42:30.616387 | orchestrator | 2026-03-16 00:42:30.616397 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-16 00:42:30.616408 | orchestrator | Monday 16 March 2026 00:42:27 +0000 (0:00:00.127) 0:00:26.743 ********** 2026-03-16 00:42:30.616419 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:42:30.616430 | orchestrator | 2026-03-16 00:42:30.616440 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-16 00:42:30.616451 | orchestrator | Monday 16 March 2026 00:42:27 +0000 (0:00:00.138) 0:00:26.881 ********** 2026-03-16 00:42:30.616462 | orchestrator | ok: [testbed-node-4] => { 2026-03-16 00:42:30.616473 | orchestrator |  "ceph_osd_devices": { 2026-03-16 00:42:30.616483 | orchestrator |  "sdb": { 2026-03-16 00:42:30.616495 | orchestrator |  "osd_lvm_uuid": "ded6401a-969b-5c16-b1be-1b69fe43ded8" 2026-03-16 00:42:30.616506 | orchestrator |  }, 2026-03-16 00:42:30.616530 | orchestrator |  "sdc": { 2026-03-16 00:42:30.616541 | orchestrator |  "osd_lvm_uuid": "01ad088d-533b-5bd8-92eb-284afc0ad32d" 2026-03-16 00:42:30.616551 | orchestrator |  } 2026-03-16 00:42:30.616562 | orchestrator |  } 2026-03-16 00:42:30.616573 | orchestrator | } 2026-03-16 00:42:30.616584 | orchestrator | 2026-03-16 00:42:30.616595 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-16 00:42:30.616617 | orchestrator | Monday 16 March 2026 00:42:27 +0000 (0:00:00.141) 0:00:27.023 ********** 2026-03-16 00:42:30.616628 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:42:30.616639 | orchestrator | 2026-03-16 00:42:30.616650 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-16 00:42:30.616661 | orchestrator | Monday 16 March 2026 00:42:27 +0000 (0:00:00.152) 0:00:27.175 ********** 2026-03-16 
00:42:30.616671 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:42:30.616682 | orchestrator |
2026-03-16 00:42:30.616693 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-16 00:42:30.616703 | orchestrator | Monday 16 March 2026 00:42:27 +0000 (0:00:00.156) 0:00:27.331 **********
2026-03-16 00:42:30.616720 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:42:30.616739 | orchestrator |
2026-03-16 00:42:30.616756 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-16 00:42:30.616771 | orchestrator | Monday 16 March 2026 00:42:27 +0000 (0:00:00.129) 0:00:27.461 **********
2026-03-16 00:42:30.616786 | orchestrator | changed: [testbed-node-4] => {
2026-03-16 00:42:30.616803 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-16 00:42:30.616820 | orchestrator |         "ceph_osd_devices": {
2026-03-16 00:42:30.616838 | orchestrator |             "sdb": {
2026-03-16 00:42:30.616855 | orchestrator |                 "osd_lvm_uuid": "ded6401a-969b-5c16-b1be-1b69fe43ded8"
2026-03-16 00:42:30.616873 | orchestrator |             },
2026-03-16 00:42:30.616891 | orchestrator |             "sdc": {
2026-03-16 00:42:30.616910 | orchestrator |                 "osd_lvm_uuid": "01ad088d-533b-5bd8-92eb-284afc0ad32d"
2026-03-16 00:42:30.616927 | orchestrator |             }
2026-03-16 00:42:30.616945 | orchestrator |         },
2026-03-16 00:42:30.616963 | orchestrator |         "lvm_volumes": [
2026-03-16 00:42:30.616980 | orchestrator |             {
2026-03-16 00:42:30.616998 | orchestrator |                 "data": "osd-block-ded6401a-969b-5c16-b1be-1b69fe43ded8",
2026-03-16 00:42:30.617049 | orchestrator |                 "data_vg": "ceph-ded6401a-969b-5c16-b1be-1b69fe43ded8"
2026-03-16 00:42:30.617069 | orchestrator |             },
2026-03-16 00:42:30.617087 | orchestrator |             {
2026-03-16 00:42:30.617104 | orchestrator |                 "data": "osd-block-01ad088d-533b-5bd8-92eb-284afc0ad32d",
2026-03-16 00:42:30.617122 | orchestrator |                 "data_vg": "ceph-01ad088d-533b-5bd8-92eb-284afc0ad32d"
2026-03-16 00:42:30.617139 | orchestrator |             }
2026-03-16 00:42:30.617157 | orchestrator |         ]
2026-03-16 00:42:30.617175 | orchestrator |     }
2026-03-16 00:42:30.617192 | orchestrator | }
2026-03-16 00:42:30.617210 | orchestrator |
2026-03-16 00:42:30.617229 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-16 00:42:30.617247 | orchestrator | Monday 16 March 2026 00:42:28 +0000 (0:00:00.220) 0:00:27.681 **********
2026-03-16 00:42:30.617259 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-03-16 00:42:30.617270 | orchestrator |
2026-03-16 00:42:30.617280 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-16 00:42:30.617291 | orchestrator |
2026-03-16 00:42:30.617302 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-16 00:42:30.617312 | orchestrator | Monday 16 March 2026 00:42:29 +0000 (0:00:01.173) 0:00:28.855 **********
2026-03-16 00:42:30.617323 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-16 00:42:30.617334 | orchestrator |
2026-03-16 00:42:30.617345 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-16 00:42:30.617370 | orchestrator | Monday 16 March 2026 00:42:29 +0000 (0:00:00.594) 0:00:29.449 **********
2026-03-16 00:42:30.617381 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:42:30.617392 | orchestrator |
2026-03-16 00:42:30.617403 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:42:30.617413 | orchestrator | Monday 16 March 2026 00:42:30 +0000 (0:00:00.217) 0:00:29.667 **********
2026-03-16 00:42:30.617424 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-03-16 00:42:30.617435 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-03-16 00:42:30.617456 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-03-16 00:42:30.617467 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-03-16 00:42:30.617477 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-03-16 00:42:30.617501 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-03-16 00:42:38.152775 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-03-16 00:42:38.152874 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-03-16 00:42:38.152885 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-03-16 00:42:38.152892 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-03-16 00:42:38.152898 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-03-16 00:42:38.152904 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-03-16 00:42:38.152910 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-03-16 00:42:38.152916 | orchestrator |
2026-03-16 00:42:38.152923 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:42:38.152931 | orchestrator | Monday 16 March 2026 00:42:30 +0000 (0:00:00.533) 0:00:30.201 **********
2026-03-16 00:42:38.152937 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:42:38.152944 | orchestrator |
2026-03-16 00:42:38.152950 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:42:38.152955 | orchestrator | Monday 16 March 2026 00:42:30 +0000 (0:00:00.198) 0:00:30.399 **********
2026-03-16 00:42:38.152962 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:42:38.152968 | orchestrator |
2026-03-16 00:42:38.152974 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:42:38.152981 | orchestrator | Monday 16 March 2026 00:42:30 +0000 (0:00:00.192) 0:00:30.591 **********
2026-03-16 00:42:38.152987 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:42:38.152993 | orchestrator |
2026-03-16 00:42:38.153023 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:42:38.153030 | orchestrator | Monday 16 March 2026 00:42:31 +0000 (0:00:00.176) 0:00:30.768 **********
2026-03-16 00:42:38.153037 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:42:38.153043 | orchestrator |
2026-03-16 00:42:38.153049 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:42:38.153056 | orchestrator | Monday 16 March 2026 00:42:31 +0000 (0:00:00.202) 0:00:30.970 **********
2026-03-16 00:42:38.153062 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:42:38.153068 | orchestrator |
2026-03-16 00:42:38.153075 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:42:38.153080 | orchestrator | Monday 16 March 2026 00:42:31 +0000 (0:00:00.182) 0:00:31.153 **********
2026-03-16 00:42:38.153086 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:42:38.153092 | orchestrator |
2026-03-16 00:42:38.153098 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:42:38.153105 | orchestrator | Monday 16 March 2026 00:42:31 +0000 (0:00:00.227) 0:00:31.380 **********
2026-03-16 00:42:38.153130 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:42:38.153136 | orchestrator |
2026-03-16 00:42:38.153142 | orchestrator | TASK [Add known
links to the list of available block devices] ******************
2026-03-16 00:42:38.153148 | orchestrator | Monday 16 March 2026 00:42:31 +0000 (0:00:00.171) 0:00:31.551 **********
2026-03-16 00:42:38.153154 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:42:38.153161 | orchestrator |
2026-03-16 00:42:38.153167 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:42:38.153173 | orchestrator | Monday 16 March 2026 00:42:32 +0000 (0:00:00.179) 0:00:31.731 **********
2026-03-16 00:42:38.153179 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055)
2026-03-16 00:42:38.153187 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055)
2026-03-16 00:42:38.153192 | orchestrator |
2026-03-16 00:42:38.153198 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:42:38.153205 | orchestrator | Monday 16 March 2026 00:42:32 +0000 (0:00:00.720) 0:00:32.451 **********
2026-03-16 00:42:38.153210 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_da655a5c-29e3-4c18-87b3-c0b6111b4096)
2026-03-16 00:42:38.153216 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_da655a5c-29e3-4c18-87b3-c0b6111b4096)
2026-03-16 00:42:38.153222 | orchestrator |
2026-03-16 00:42:38.153228 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:42:38.153234 | orchestrator | Monday 16 March 2026 00:42:33 +0000 (0:00:00.366) 0:00:32.818 **********
2026-03-16 00:42:38.153242 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_75257afc-ff3d-423c-9b8c-9aa6b4de753a)
2026-03-16 00:42:38.153248 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_75257afc-ff3d-423c-9b8c-9aa6b4de753a)
2026-03-16 00:42:38.153254 | orchestrator |
2026-03-16 00:42:38.153260 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:42:38.153266 | orchestrator | Monday 16 March 2026 00:42:33 +0000 (0:00:00.420) 0:00:33.238 **********
2026-03-16 00:42:38.153271 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_573bd76d-2068-40ae-bffe-bd7cc0e0b9d7)
2026-03-16 00:42:38.153277 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_573bd76d-2068-40ae-bffe-bd7cc0e0b9d7)
2026-03-16 00:42:38.153282 | orchestrator |
2026-03-16 00:42:38.153288 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:42:38.153294 | orchestrator | Monday 16 March 2026 00:42:34 +0000 (0:00:00.440) 0:00:33.678 **********
2026-03-16 00:42:38.153299 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-16 00:42:38.153305 | orchestrator |
2026-03-16 00:42:38.153311 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-16 00:42:38.153332 | orchestrator | Monday 16 March 2026 00:42:34 +0000 (0:00:00.271) 0:00:33.950 **********
2026-03-16 00:42:38.153339 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-03-16 00:42:38.153345 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-03-16 00:42:38.153351 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-03-16 00:42:38.153357 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-03-16 00:42:38.153363 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-03-16 00:42:38.153370 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-03-16 00:42:38.153375 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-03-16 00:42:38.153381 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-03-16 00:42:38.153395 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-03-16 00:42:38.153401 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-03-16 00:42:38.153407 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-03-16 00:42:38.153428 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-03-16 00:42:38.153435 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-03-16 00:42:38.153441 | orchestrator |
2026-03-16 00:42:38.153448 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-16 00:42:38.153455 | orchestrator | Monday 16 March 2026 00:42:34 +0000 (0:00:00.387) 0:00:34.337 **********
2026-03-16 00:42:38.153462 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:42:38.153468 | orchestrator |
2026-03-16 00:42:38.153474 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-16 00:42:38.153480 | orchestrator | Monday 16 March 2026 00:42:34 +0000 (0:00:00.212) 0:00:34.549 **********
2026-03-16 00:42:38.153487 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:42:38.153493 | orchestrator |
2026-03-16 00:42:38.153499 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-16 00:42:38.153505 | orchestrator | Monday 16 March 2026 00:42:35 +0000 (0:00:00.185) 0:00:34.735 **********
2026-03-16 00:42:38.153514 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:42:38.153520 | orchestrator |
2026-03-16 00:42:38.153526 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-16 00:42:38.153532 | orchestrator | Monday 16 March 2026 00:42:35 +0000 (0:00:00.187) 0:00:34.923 **********
2026-03-16 00:42:38.153538 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:42:38.153544 | orchestrator |
2026-03-16 00:42:38.153550 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-16 00:42:38.153556 | orchestrator | Monday 16 March 2026 00:42:35 +0000 (0:00:00.243) 0:00:35.166 **********
2026-03-16 00:42:38.153562 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:42:38.153568 | orchestrator |
2026-03-16 00:42:38.153574 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-16 00:42:38.153581 | orchestrator | Monday 16 March 2026 00:42:35 +0000 (0:00:00.209) 0:00:35.376 **********
2026-03-16 00:42:38.153587 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:42:38.153594 | orchestrator |
2026-03-16 00:42:38.153599 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-16 00:42:38.153606 | orchestrator | Monday 16 March 2026 00:42:36 +0000 (0:00:00.599) 0:00:35.976 **********
2026-03-16 00:42:38.153612 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:42:38.153619 | orchestrator |
2026-03-16 00:42:38.153625 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-16 00:42:38.153632 | orchestrator | Monday 16 March 2026 00:42:36 +0000 (0:00:00.181) 0:00:36.157 **********
2026-03-16 00:42:38.153638 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:42:38.153645 | orchestrator |
2026-03-16 00:42:38.153651 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-16 00:42:38.153658 | orchestrator | Monday 16 March 2026 00:42:36 +0000
(0:00:00.179) 0:00:36.337 **********
2026-03-16 00:42:38.153664 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-03-16 00:42:38.153671 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-03-16 00:42:38.153679 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-03-16 00:42:38.153685 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-03-16 00:42:38.153691 | orchestrator |
2026-03-16 00:42:38.153697 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-16 00:42:38.153703 | orchestrator | Monday 16 March 2026 00:42:37 +0000 (0:00:00.597) 0:00:36.935 **********
2026-03-16 00:42:38.153709 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:42:38.153715 | orchestrator |
2026-03-16 00:42:38.153727 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-16 00:42:38.153732 | orchestrator | Monday 16 March 2026 00:42:37 +0000 (0:00:00.208) 0:00:37.143 **********
2026-03-16 00:42:38.153737 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:42:38.153743 | orchestrator |
2026-03-16 00:42:38.153749 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-16 00:42:38.153755 | orchestrator | Monday 16 March 2026 00:42:37 +0000 (0:00:00.189) 0:00:37.332 **********
2026-03-16 00:42:38.153761 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:42:38.153768 | orchestrator |
2026-03-16 00:42:38.153774 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-16 00:42:38.153780 | orchestrator | Monday 16 March 2026 00:42:37 +0000 (0:00:00.214) 0:00:37.546 **********
2026-03-16 00:42:38.153785 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:42:38.153791 | orchestrator |
2026-03-16 00:42:38.153803 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-16 00:42:42.606517 | orchestrator | Monday 16 March 2026 00:42:38 +0000 (0:00:00.192) 0:00:37.739 **********
2026-03-16 00:42:42.606606 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-03-16 00:42:42.606618 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-03-16 00:42:42.606627 | orchestrator |
2026-03-16 00:42:42.606636 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-16 00:42:42.606645 | orchestrator | Monday 16 March 2026 00:42:38 +0000 (0:00:00.160) 0:00:37.899 **********
2026-03-16 00:42:42.606653 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:42:42.606661 | orchestrator |
2026-03-16 00:42:42.606669 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-16 00:42:42.606677 | orchestrator | Monday 16 March 2026 00:42:38 +0000 (0:00:00.133) 0:00:38.033 **********
2026-03-16 00:42:42.606685 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:42:42.606693 | orchestrator |
2026-03-16 00:42:42.606701 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-16 00:42:42.606709 | orchestrator | Monday 16 March 2026 00:42:38 +0000 (0:00:00.134) 0:00:38.167 **********
2026-03-16 00:42:42.606716 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:42:42.606726 | orchestrator |
2026-03-16 00:42:42.606738 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-16 00:42:42.606751 | orchestrator | Monday 16 March 2026 00:42:38 +0000 (0:00:00.257) 0:00:38.425 **********
2026-03-16 00:42:42.606764 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:42:42.606784 | orchestrator |
2026-03-16 00:42:42.606798 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-16 00:42:42.606812 | orchestrator | Monday 16 March 2026 00:42:38 +0000 (0:00:00.133) 0:00:38.558 **********
2026-03-16 00:42:42.606825 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '20eacd0a-f744-531e-8511-c5afb936ef86'}})
2026-03-16 00:42:42.606838 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c2695ca6-70a1-5c1a-b7de-886954e6bf07'}})
2026-03-16 00:42:42.606850 | orchestrator |
2026-03-16 00:42:42.606862 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-03-16 00:42:42.606875 | orchestrator | Monday 16 March 2026 00:42:39 +0000 (0:00:00.152) 0:00:38.711 **********
2026-03-16 00:42:42.606888 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '20eacd0a-f744-531e-8511-c5afb936ef86'}})
2026-03-16 00:42:42.606903 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c2695ca6-70a1-5c1a-b7de-886954e6bf07'}})
2026-03-16 00:42:42.606917 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:42:42.606930 | orchestrator |
2026-03-16 00:42:42.606943 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-03-16 00:42:42.606957 | orchestrator | Monday 16 March 2026 00:42:39 +0000 (0:00:00.138) 0:00:38.849 **********
2026-03-16 00:42:42.606970 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '20eacd0a-f744-531e-8511-c5afb936ef86'}})
2026-03-16 00:42:42.607028 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c2695ca6-70a1-5c1a-b7de-886954e6bf07'}})
2026-03-16 00:42:42.607038 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:42:42.607047 | orchestrator |
2026-03-16 00:42:42.607056 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-03-16 00:42:42.607066 | orchestrator | Monday 16 March 2026 00:42:39 +0000 (0:00:00.138) 0:00:38.988 **********
2026-03-16 00:42:42.607074 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '20eacd0a-f744-531e-8511-c5afb936ef86'}})
2026-03-16 00:42:42.607084 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c2695ca6-70a1-5c1a-b7de-886954e6bf07'}})
2026-03-16 00:42:42.607093 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:42:42.607102 | orchestrator |
2026-03-16 00:42:42.607111 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-03-16 00:42:42.607120 | orchestrator | Monday 16 March 2026 00:42:39 +0000 (0:00:00.130) 0:00:39.119 **********
2026-03-16 00:42:42.607129 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:42:42.607138 | orchestrator |
2026-03-16 00:42:42.607147 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-16 00:42:42.607156 | orchestrator | Monday 16 March 2026 00:42:39 +0000 (0:00:00.108) 0:00:39.228 **********
2026-03-16 00:42:42.607164 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:42:42.607173 | orchestrator |
2026-03-16 00:42:42.607196 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-16 00:42:42.607206 | orchestrator | Monday 16 March 2026 00:42:39 +0000 (0:00:00.176) 0:00:39.405 **********
2026-03-16 00:42:42.607214 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:42:42.607222 | orchestrator |
2026-03-16 00:42:42.607230 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-03-16 00:42:42.607238 | orchestrator | Monday 16 March 2026 00:42:39 +0000 (0:00:00.148) 0:00:39.554 **********
2026-03-16 00:42:42.607246 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:42:42.607253 | orchestrator |
2026-03-16 00:42:42.607261 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-03-16 00:42:42.607269 | orchestrator |
Monday 16 March 2026 00:42:40 +0000 (0:00:00.174) 0:00:39.729 **********
2026-03-16 00:42:42.607277 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:42:42.607285 | orchestrator |
2026-03-16 00:42:42.607292 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-03-16 00:42:42.607300 | orchestrator | Monday 16 March 2026 00:42:40 +0000 (0:00:00.153) 0:00:39.882 **********
2026-03-16 00:42:42.607308 | orchestrator | ok: [testbed-node-5] => {
2026-03-16 00:42:42.607316 | orchestrator |     "ceph_osd_devices": {
2026-03-16 00:42:42.607324 | orchestrator |         "sdb": {
2026-03-16 00:42:42.607349 | orchestrator |             "osd_lvm_uuid": "20eacd0a-f744-531e-8511-c5afb936ef86"
2026-03-16 00:42:42.607357 | orchestrator |         },
2026-03-16 00:42:42.607365 | orchestrator |         "sdc": {
2026-03-16 00:42:42.607373 | orchestrator |             "osd_lvm_uuid": "c2695ca6-70a1-5c1a-b7de-886954e6bf07"
2026-03-16 00:42:42.607381 | orchestrator |         }
2026-03-16 00:42:42.607389 | orchestrator |     }
2026-03-16 00:42:42.607397 | orchestrator | }
2026-03-16 00:42:42.607405 | orchestrator |
2026-03-16 00:42:42.607413 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-16 00:42:42.607421 | orchestrator | Monday 16 March 2026 00:42:40 +0000 (0:00:00.165) 0:00:40.048 **********
2026-03-16 00:42:42.607429 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:42:42.607436 | orchestrator |
2026-03-16 00:42:42.607444 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-16 00:42:42.607452 | orchestrator | Monday 16 March 2026 00:42:40 +0000 (0:00:00.431) 0:00:40.480 **********
2026-03-16 00:42:42.607459 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:42:42.607476 | orchestrator |
2026-03-16 00:42:42.607484 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-16 00:42:42.607491 | orchestrator | Monday 16 March 2026 00:42:41 +0000 (0:00:00.152) 0:00:40.632 **********
2026-03-16 00:42:42.607499 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:42:42.607507 | orchestrator |
2026-03-16 00:42:42.607514 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-16 00:42:42.607522 | orchestrator | Monday 16 March 2026 00:42:41 +0000 (0:00:00.176) 0:00:40.808 **********
2026-03-16 00:42:42.607530 | orchestrator | changed: [testbed-node-5] => {
2026-03-16 00:42:42.607538 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-16 00:42:42.607545 | orchestrator |         "ceph_osd_devices": {
2026-03-16 00:42:42.607553 | orchestrator |             "sdb": {
2026-03-16 00:42:42.607561 | orchestrator |                 "osd_lvm_uuid": "20eacd0a-f744-531e-8511-c5afb936ef86"
2026-03-16 00:42:42.607569 | orchestrator |             },
2026-03-16 00:42:42.607577 | orchestrator |             "sdc": {
2026-03-16 00:42:42.607584 | orchestrator |                 "osd_lvm_uuid": "c2695ca6-70a1-5c1a-b7de-886954e6bf07"
2026-03-16 00:42:42.607592 | orchestrator |             }
2026-03-16 00:42:42.607600 | orchestrator |         },
2026-03-16 00:42:42.607607 | orchestrator |         "lvm_volumes": [
2026-03-16 00:42:42.607615 | orchestrator |             {
2026-03-16 00:42:42.607623 | orchestrator |                 "data": "osd-block-20eacd0a-f744-531e-8511-c5afb936ef86",
2026-03-16 00:42:42.607631 | orchestrator |                 "data_vg": "ceph-20eacd0a-f744-531e-8511-c5afb936ef86"
2026-03-16 00:42:42.607639 | orchestrator |             },
2026-03-16 00:42:42.607647 | orchestrator |             {
2026-03-16 00:42:42.607654 | orchestrator |                 "data": "osd-block-c2695ca6-70a1-5c1a-b7de-886954e6bf07",
2026-03-16 00:42:42.607666 | orchestrator |                 "data_vg": "ceph-c2695ca6-70a1-5c1a-b7de-886954e6bf07"
2026-03-16 00:42:42.607674 | orchestrator |             }
2026-03-16 00:42:42.607682 | orchestrator |         ]
2026-03-16 00:42:42.607693 | orchestrator |     }
2026-03-16 00:42:42.607701 | orchestrator | }
2026-03-16 00:42:42.607709 | orchestrator |
2026-03-16 00:42:42.607717 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-16 00:42:42.607725 | orchestrator | Monday 16 March 2026 00:42:41 +0000 (0:00:00.237) 0:00:41.046 **********
2026-03-16 00:42:42.607733 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-16 00:42:42.607741 | orchestrator |
2026-03-16 00:42:42.607748 | orchestrator | PLAY RECAP *********************************************************************
2026-03-16 00:42:42.607756 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-16 00:42:42.607765 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-16 00:42:42.607773 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-16 00:42:42.607781 | orchestrator |
2026-03-16 00:42:42.607789 | orchestrator |
2026-03-16 00:42:42.607797 | orchestrator |
2026-03-16 00:42:42.607805 | orchestrator | TASKS RECAP ********************************************************************
2026-03-16 00:42:42.607812 | orchestrator | Monday 16 March 2026 00:42:42 +0000 (0:00:01.138) 0:00:42.185 **********
2026-03-16 00:42:42.607820 | orchestrator | ===============================================================================
2026-03-16 00:42:42.607828 | orchestrator | Write configuration file ------------------------------------------------ 4.10s
2026-03-16 00:42:42.607836 | orchestrator | Add known links to the list of available block devices ------------------ 1.33s
2026-03-16 00:42:42.607843 | orchestrator | Add known partitions to the list of available block devices ------------- 1.14s
2026-03-16 00:42:42.607851 | orchestrator | Add known partitions to the list of available block devices ------------- 1.08s
2026-03-16 00:42:42.607864 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.07s
2026-03-16 00:42:42.607872 | orchestrator | Add known partitions to the list of available block devices ------------- 0.96s
2026-03-16 00:42:42.607880 | orchestrator | Print configuration data ------------------------------------------------ 0.92s
2026-03-16 00:42:42.607888 | orchestrator | Add known partitions to the list of available block devices ------------- 0.89s
2026-03-16 00:42:42.607895 | orchestrator | Add known partitions to the list of available block devices ------------- 0.81s
2026-03-16 00:42:42.607903 | orchestrator | Set DB devices config data ---------------------------------------------- 0.73s
2026-03-16 00:42:42.607911 | orchestrator | Print WAL devices ------------------------------------------------------- 0.73s
2026-03-16 00:42:42.607919 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s
2026-03-16 00:42:42.607926 | orchestrator | Add known links to the list of available block devices ------------------ 0.71s
2026-03-16 00:42:42.607939 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.67s
2026-03-16 00:42:42.857536 | orchestrator | Get initial list of available block devices ----------------------------- 0.65s
2026-03-16 00:42:42.857636 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s
2026-03-16 00:42:42.857650 | orchestrator | Add known partitions to the list of available block devices ------------- 0.60s
2026-03-16 00:42:42.857662 | orchestrator | Add known partitions to the list of available block devices ------------- 0.60s
2026-03-16 00:42:42.857673 | orchestrator | Add known links to the list of available block devices ------------------ 0.55s
2026-03-16 00:42:42.857684 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.53s
2026-03-16 00:43:05.325476 | orchestrator | 2026-03-16 00:43:05 | INFO  | Task 689e6e58-8733-4dd8-89b9-b63e50d8956f (sync inventory) is running in
background. Output coming soon.
2026-03-16 00:43:33.719735 | orchestrator | 2026-03-16 00:43:07 | INFO  | Starting group_vars file reorganization
2026-03-16 00:43:33.719840 | orchestrator | 2026-03-16 00:43:07 | INFO  | Moved 0 file(s) to their respective directories
2026-03-16 00:43:33.719851 | orchestrator | 2026-03-16 00:43:07 | INFO  | Group_vars file reorganization completed
2026-03-16 00:43:33.719858 | orchestrator | 2026-03-16 00:43:10 | INFO  | Starting variable preparation from inventory
2026-03-16 00:43:33.719864 | orchestrator | 2026-03-16 00:43:13 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-03-16 00:43:33.719870 | orchestrator | 2026-03-16 00:43:13 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-03-16 00:43:33.719875 | orchestrator | 2026-03-16 00:43:13 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-03-16 00:43:33.719881 | orchestrator | 2026-03-16 00:43:13 | INFO  | 3 file(s) written, 6 host(s) processed
2026-03-16 00:43:33.719887 | orchestrator | 2026-03-16 00:43:13 | INFO  | Variable preparation completed
2026-03-16 00:43:33.719893 | orchestrator | 2026-03-16 00:43:15 | INFO  | Starting inventory overwrite handling
2026-03-16 00:43:33.719899 | orchestrator | 2026-03-16 00:43:15 | INFO  | Handling group overwrites in 99-overwrite
2026-03-16 00:43:33.719904 | orchestrator | 2026-03-16 00:43:15 | INFO  | Removing group frr:children from 60-generic
2026-03-16 00:43:33.719910 | orchestrator | 2026-03-16 00:43:15 | INFO  | Removing group netbird:children from 50-infrastructure
2026-03-16 00:43:33.719981 | orchestrator | 2026-03-16 00:43:15 | INFO  | Removing group ceph-rgw from 50-ceph
2026-03-16 00:43:33.719990 | orchestrator | 2026-03-16 00:43:15 | INFO  | Removing group ceph-mds from 50-ceph
2026-03-16 00:43:33.719995 | orchestrator | 2026-03-16 00:43:15 | INFO  | Handling group overwrites in 20-roles
2026-03-16 00:43:33.720001 | orchestrator | 2026-03-16 00:43:15 | INFO  | Removing group k3s_node from 50-infrastructure
2026-03-16 00:43:33.720024 | orchestrator | 2026-03-16 00:43:15 | INFO  | Removed 5 group(s) in total
2026-03-16 00:43:33.720030 | orchestrator | 2026-03-16 00:43:15 | INFO  | Inventory overwrite handling completed
2026-03-16 00:43:33.720036 | orchestrator | 2026-03-16 00:43:16 | INFO  | Starting merge of inventory files
2026-03-16 00:43:33.720041 | orchestrator | 2026-03-16 00:43:16 | INFO  | Inventory files merged successfully
2026-03-16 00:43:33.720047 | orchestrator | 2026-03-16 00:43:21 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-03-16 00:43:33.720052 | orchestrator | 2026-03-16 00:43:32 | INFO  | Successfully wrote ClusterShell configuration
2026-03-16 00:43:33.720058 | orchestrator | [master 56faca7] 2026-03-16-00-43
2026-03-16 00:43:33.720065 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-03-16 00:43:36.012319 | orchestrator | 2026-03-16 00:43:36 | INFO  | Task 91eed584-cf57-4c49-a296-88484e1d2cc7 (ceph-create-lvm-devices) was prepared for execution.
2026-03-16 00:43:36.012387 | orchestrator | 2026-03-16 00:43:36 | INFO  | It takes a moment until task 91eed584-cf57-4c49-a296-88484e1d2cc7 (ceph-create-lvm-devices) has been started and output is visible here.
2026-03-16 00:43:47.391760 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-16 00:43:47.391846 | orchestrator | 2.16.14
2026-03-16 00:43:47.391854 | orchestrator |
2026-03-16 00:43:47.391859 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-16 00:43:47.391864 | orchestrator |
2026-03-16 00:43:47.391868 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-16 00:43:47.391873 | orchestrator | Monday 16 March 2026 00:43:40 +0000 (0:00:00.313) 0:00:00.313 **********
2026-03-16 00:43:47.391878 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-16 00:43:47.391882 | orchestrator |
2026-03-16 00:43:47.391886 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-16 00:43:47.391890 | orchestrator | Monday 16 March 2026 00:43:40 +0000 (0:00:00.234) 0:00:00.548 **********
2026-03-16 00:43:47.391894 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:43:47.391898 | orchestrator |
2026-03-16 00:43:47.391902 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:43:47.391983 | orchestrator | Monday 16 March 2026 00:43:41 +0000 (0:00:00.219) 0:00:00.767 **********
2026-03-16 00:43:47.391994 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-16 00:43:47.392001 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-16 00:43:47.392009 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-16 00:43:47.392013 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-16 00:43:47.392017 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-16
00:43:47.392021 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-03-16 00:43:47.392025 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-03-16 00:43:47.392029 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-03-16 00:43:47.392033 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-03-16 00:43:47.392037 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-03-16 00:43:47.392041 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-03-16 00:43:47.392045 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-03-16 00:43:47.392048 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-03-16 00:43:47.392073 | orchestrator | 2026-03-16 00:43:47.392081 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-16 00:43:47.392089 | orchestrator | Monday 16 March 2026 00:43:41 +0000 (0:00:00.543) 0:00:01.310 ********** 2026-03-16 00:43:47.392096 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:43:47.392102 | orchestrator | 2026-03-16 00:43:47.392108 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-16 00:43:47.392114 | orchestrator | Monday 16 March 2026 00:43:41 +0000 (0:00:00.191) 0:00:01.502 ********** 2026-03-16 00:43:47.392120 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:43:47.392125 | orchestrator | 2026-03-16 00:43:47.392131 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-16 00:43:47.392137 | orchestrator | Monday 16 March 2026 00:43:41 +0000 (0:00:00.199) 0:00:01.702 ********** 2026-03-16 
00:43:47.392142 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:43:47.392148 | orchestrator | 2026-03-16 00:43:47.392154 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-16 00:43:47.392160 | orchestrator | Monday 16 March 2026 00:43:42 +0000 (0:00:00.181) 0:00:01.884 ********** 2026-03-16 00:43:47.392165 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:43:47.392171 | orchestrator | 2026-03-16 00:43:47.392177 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-16 00:43:47.392183 | orchestrator | Monday 16 March 2026 00:43:42 +0000 (0:00:00.172) 0:00:02.057 ********** 2026-03-16 00:43:47.392189 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:43:47.392195 | orchestrator | 2026-03-16 00:43:47.392202 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-16 00:43:47.392208 | orchestrator | Monday 16 March 2026 00:43:42 +0000 (0:00:00.168) 0:00:02.226 ********** 2026-03-16 00:43:47.392214 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:43:47.392220 | orchestrator | 2026-03-16 00:43:47.392226 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-16 00:43:47.392234 | orchestrator | Monday 16 March 2026 00:43:42 +0000 (0:00:00.183) 0:00:02.409 ********** 2026-03-16 00:43:47.392238 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:43:47.392244 | orchestrator | 2026-03-16 00:43:47.392252 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-16 00:43:47.392260 | orchestrator | Monday 16 March 2026 00:43:42 +0000 (0:00:00.184) 0:00:02.594 ********** 2026-03-16 00:43:47.392266 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:43:47.392272 | orchestrator | 2026-03-16 00:43:47.392278 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-03-16 00:43:47.392285 | orchestrator | Monday 16 March 2026 00:43:43 +0000 (0:00:00.194) 0:00:02.788 ********** 2026-03-16 00:43:47.392290 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa) 2026-03-16 00:43:47.392296 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa) 2026-03-16 00:43:47.392300 | orchestrator | 2026-03-16 00:43:47.392304 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-16 00:43:47.392322 | orchestrator | Monday 16 March 2026 00:43:43 +0000 (0:00:00.424) 0:00:03.212 ********** 2026-03-16 00:43:47.392327 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ea250d8e-8a1a-4b7c-87ac-b5f969c8dfc9) 2026-03-16 00:43:47.392331 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ea250d8e-8a1a-4b7c-87ac-b5f969c8dfc9) 2026-03-16 00:43:47.392336 | orchestrator | 2026-03-16 00:43:47.392340 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-16 00:43:47.392344 | orchestrator | Monday 16 March 2026 00:43:44 +0000 (0:00:00.640) 0:00:03.853 ********** 2026-03-16 00:43:47.392349 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_638de7de-7e30-41bf-b0e2-bce66f40688c) 2026-03-16 00:43:47.392353 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_638de7de-7e30-41bf-b0e2-bce66f40688c) 2026-03-16 00:43:47.392364 | orchestrator | 2026-03-16 00:43:47.392368 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-16 00:43:47.392372 | orchestrator | Monday 16 March 2026 00:43:44 +0000 (0:00:00.517) 0:00:04.370 ********** 2026-03-16 00:43:47.392377 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8261b325-336c-474c-bfd4-8f783607e19f) 2026-03-16 00:43:47.392381 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8261b325-336c-474c-bfd4-8f783607e19f) 2026-03-16 00:43:47.392385 | orchestrator | 2026-03-16 00:43:47.392389 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-16 00:43:47.392393 | orchestrator | Monday 16 March 2026 00:43:45 +0000 (0:00:00.750) 0:00:05.120 ********** 2026-03-16 00:43:47.392398 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-16 00:43:47.392402 | orchestrator | 2026-03-16 00:43:47.392406 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:43:47.392411 | orchestrator | Monday 16 March 2026 00:43:45 +0000 (0:00:00.285) 0:00:05.405 ********** 2026-03-16 00:43:47.392415 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-03-16 00:43:47.392420 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-03-16 00:43:47.392424 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-03-16 00:43:47.392429 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-03-16 00:43:47.392433 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-03-16 00:43:47.392437 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-03-16 00:43:47.392441 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-03-16 00:43:47.392445 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-03-16 00:43:47.392449 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-03-16 00:43:47.392454 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-03-16 00:43:47.392458 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-03-16 00:43:47.392477 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-03-16 00:43:47.392481 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-03-16 00:43:47.392486 | orchestrator | 2026-03-16 00:43:47.392490 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:43:47.392494 | orchestrator | Monday 16 March 2026 00:43:46 +0000 (0:00:00.360) 0:00:05.766 ********** 2026-03-16 00:43:47.392499 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:43:47.392503 | orchestrator | 2026-03-16 00:43:47.392507 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:43:47.392511 | orchestrator | Monday 16 March 2026 00:43:46 +0000 (0:00:00.212) 0:00:05.979 ********** 2026-03-16 00:43:47.392516 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:43:47.392520 | orchestrator | 2026-03-16 00:43:47.392524 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:43:47.392529 | orchestrator | Monday 16 March 2026 00:43:46 +0000 (0:00:00.192) 0:00:06.172 ********** 2026-03-16 00:43:47.392533 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:43:47.392537 | orchestrator | 2026-03-16 00:43:47.392541 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:43:47.392546 | orchestrator | Monday 16 March 2026 00:43:46 +0000 (0:00:00.180) 0:00:06.352 ********** 2026-03-16 00:43:47.392550 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:43:47.392558 | orchestrator | 2026-03-16 00:43:47.392562 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-03-16 00:43:47.392566 | orchestrator | Monday 16 March 2026 00:43:46 +0000 (0:00:00.199) 0:00:06.551 ********** 2026-03-16 00:43:47.392571 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:43:47.392575 | orchestrator | 2026-03-16 00:43:47.392579 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:43:47.392584 | orchestrator | Monday 16 March 2026 00:43:47 +0000 (0:00:00.223) 0:00:06.774 ********** 2026-03-16 00:43:47.392588 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:43:47.392592 | orchestrator | 2026-03-16 00:43:47.392597 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:43:47.392601 | orchestrator | Monday 16 March 2026 00:43:47 +0000 (0:00:00.171) 0:00:06.946 ********** 2026-03-16 00:43:47.392607 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:43:47.392613 | orchestrator | 2026-03-16 00:43:47.392622 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:43:55.471207 | orchestrator | Monday 16 March 2026 00:43:47 +0000 (0:00:00.195) 0:00:07.142 ********** 2026-03-16 00:43:55.471281 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:43:55.471290 | orchestrator | 2026-03-16 00:43:55.471297 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:43:55.471303 | orchestrator | Monday 16 March 2026 00:43:47 +0000 (0:00:00.197) 0:00:07.340 ********** 2026-03-16 00:43:55.471310 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-03-16 00:43:55.471316 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-03-16 00:43:55.471323 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-03-16 00:43:55.471329 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-03-16 00:43:55.471334 | orchestrator | 2026-03-16 
00:43:55.471340 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:43:55.471346 | orchestrator | Monday 16 March 2026 00:43:48 +0000 (0:00:00.932) 0:00:08.273 ********** 2026-03-16 00:43:55.471352 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:43:55.471358 | orchestrator | 2026-03-16 00:43:55.471364 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:43:55.471370 | orchestrator | Monday 16 March 2026 00:43:48 +0000 (0:00:00.188) 0:00:08.461 ********** 2026-03-16 00:43:55.471377 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:43:55.471383 | orchestrator | 2026-03-16 00:43:55.471389 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:43:55.471395 | orchestrator | Monday 16 March 2026 00:43:48 +0000 (0:00:00.178) 0:00:08.639 ********** 2026-03-16 00:43:55.471402 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:43:55.471408 | orchestrator | 2026-03-16 00:43:55.471414 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:43:55.471421 | orchestrator | Monday 16 March 2026 00:43:49 +0000 (0:00:00.193) 0:00:08.832 ********** 2026-03-16 00:43:55.471427 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:43:55.471433 | orchestrator | 2026-03-16 00:43:55.471439 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-16 00:43:55.471446 | orchestrator | Monday 16 March 2026 00:43:49 +0000 (0:00:00.224) 0:00:09.057 ********** 2026-03-16 00:43:55.471452 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:43:55.471458 | orchestrator | 2026-03-16 00:43:55.471465 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-16 00:43:55.471471 | orchestrator | Monday 16 March 2026 00:43:49 +0000 (0:00:00.116) 
0:00:09.173 ********** 2026-03-16 00:43:55.471478 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '71e0430a-6bf1-53ec-905e-7c884e89f784'}}) 2026-03-16 00:43:55.471485 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '40b418b1-0bd6-568c-82b5-8ddc4abd3365'}}) 2026-03-16 00:43:55.471491 | orchestrator | 2026-03-16 00:43:55.471497 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-16 00:43:55.471519 | orchestrator | Monday 16 March 2026 00:43:49 +0000 (0:00:00.158) 0:00:09.332 ********** 2026-03-16 00:43:55.471527 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-71e0430a-6bf1-53ec-905e-7c884e89f784', 'data_vg': 'ceph-71e0430a-6bf1-53ec-905e-7c884e89f784'}) 2026-03-16 00:43:55.471534 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-40b418b1-0bd6-568c-82b5-8ddc4abd3365', 'data_vg': 'ceph-40b418b1-0bd6-568c-82b5-8ddc4abd3365'}) 2026-03-16 00:43:55.471541 | orchestrator | 2026-03-16 00:43:55.471547 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-16 00:43:55.471562 | orchestrator | Monday 16 March 2026 00:43:51 +0000 (0:00:02.106) 0:00:11.439 ********** 2026-03-16 00:43:55.471569 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71e0430a-6bf1-53ec-905e-7c884e89f784', 'data_vg': 'ceph-71e0430a-6bf1-53ec-905e-7c884e89f784'})  2026-03-16 00:43:55.471576 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-40b418b1-0bd6-568c-82b5-8ddc4abd3365', 'data_vg': 'ceph-40b418b1-0bd6-568c-82b5-8ddc4abd3365'})  2026-03-16 00:43:55.471583 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:43:55.471589 | orchestrator | 2026-03-16 00:43:55.471595 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-16 00:43:55.471601 | orchestrator | Monday 16 March 2026 
00:43:51 +0000 (0:00:00.143) 0:00:11.583 ********** 2026-03-16 00:43:55.471608 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-71e0430a-6bf1-53ec-905e-7c884e89f784', 'data_vg': 'ceph-71e0430a-6bf1-53ec-905e-7c884e89f784'}) 2026-03-16 00:43:55.471614 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-40b418b1-0bd6-568c-82b5-8ddc4abd3365', 'data_vg': 'ceph-40b418b1-0bd6-568c-82b5-8ddc4abd3365'}) 2026-03-16 00:43:55.471620 | orchestrator | 2026-03-16 00:43:55.471627 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-16 00:43:55.471633 | orchestrator | Monday 16 March 2026 00:43:53 +0000 (0:00:01.497) 0:00:13.081 ********** 2026-03-16 00:43:55.471640 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71e0430a-6bf1-53ec-905e-7c884e89f784', 'data_vg': 'ceph-71e0430a-6bf1-53ec-905e-7c884e89f784'})  2026-03-16 00:43:55.471646 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-40b418b1-0bd6-568c-82b5-8ddc4abd3365', 'data_vg': 'ceph-40b418b1-0bd6-568c-82b5-8ddc4abd3365'})  2026-03-16 00:43:55.471652 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:43:55.471658 | orchestrator | 2026-03-16 00:43:55.471664 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-16 00:43:55.471669 | orchestrator | Monday 16 March 2026 00:43:53 +0000 (0:00:00.155) 0:00:13.237 ********** 2026-03-16 00:43:55.471683 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:43:55.471687 | orchestrator | 2026-03-16 00:43:55.471690 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-16 00:43:55.471694 | orchestrator | Monday 16 March 2026 00:43:53 +0000 (0:00:00.134) 0:00:13.371 ********** 2026-03-16 00:43:55.471698 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71e0430a-6bf1-53ec-905e-7c884e89f784', 'data_vg': 
'ceph-71e0430a-6bf1-53ec-905e-7c884e89f784'})  2026-03-16 00:43:55.471702 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-40b418b1-0bd6-568c-82b5-8ddc4abd3365', 'data_vg': 'ceph-40b418b1-0bd6-568c-82b5-8ddc4abd3365'})  2026-03-16 00:43:55.471705 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:43:55.471709 | orchestrator | 2026-03-16 00:43:55.471713 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-16 00:43:55.471717 | orchestrator | Monday 16 March 2026 00:43:53 +0000 (0:00:00.372) 0:00:13.743 ********** 2026-03-16 00:43:55.471720 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:43:55.471724 | orchestrator | 2026-03-16 00:43:55.471728 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-16 00:43:55.471732 | orchestrator | Monday 16 March 2026 00:43:54 +0000 (0:00:00.158) 0:00:13.902 ********** 2026-03-16 00:43:55.471741 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71e0430a-6bf1-53ec-905e-7c884e89f784', 'data_vg': 'ceph-71e0430a-6bf1-53ec-905e-7c884e89f784'})  2026-03-16 00:43:55.471745 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-40b418b1-0bd6-568c-82b5-8ddc4abd3365', 'data_vg': 'ceph-40b418b1-0bd6-568c-82b5-8ddc4abd3365'})  2026-03-16 00:43:55.471748 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:43:55.471752 | orchestrator | 2026-03-16 00:43:55.471756 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-16 00:43:55.471760 | orchestrator | Monday 16 March 2026 00:43:54 +0000 (0:00:00.156) 0:00:14.058 ********** 2026-03-16 00:43:55.471763 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:43:55.471767 | orchestrator | 2026-03-16 00:43:55.471771 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-16 00:43:55.471775 | orchestrator | Monday 
16 March 2026 00:43:54 +0000 (0:00:00.129) 0:00:14.187 ********** 2026-03-16 00:43:55.471778 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71e0430a-6bf1-53ec-905e-7c884e89f784', 'data_vg': 'ceph-71e0430a-6bf1-53ec-905e-7c884e89f784'})  2026-03-16 00:43:55.471782 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-40b418b1-0bd6-568c-82b5-8ddc4abd3365', 'data_vg': 'ceph-40b418b1-0bd6-568c-82b5-8ddc4abd3365'})  2026-03-16 00:43:55.471786 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:43:55.471790 | orchestrator | 2026-03-16 00:43:55.471793 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-16 00:43:55.471798 | orchestrator | Monday 16 March 2026 00:43:54 +0000 (0:00:00.159) 0:00:14.347 ********** 2026-03-16 00:43:55.471809 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:43:55.471814 | orchestrator | 2026-03-16 00:43:55.471818 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-16 00:43:55.471823 | orchestrator | Monday 16 March 2026 00:43:54 +0000 (0:00:00.149) 0:00:14.496 ********** 2026-03-16 00:43:55.471827 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71e0430a-6bf1-53ec-905e-7c884e89f784', 'data_vg': 'ceph-71e0430a-6bf1-53ec-905e-7c884e89f784'})  2026-03-16 00:43:55.471832 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-40b418b1-0bd6-568c-82b5-8ddc4abd3365', 'data_vg': 'ceph-40b418b1-0bd6-568c-82b5-8ddc4abd3365'})  2026-03-16 00:43:55.471836 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:43:55.471841 | orchestrator | 2026-03-16 00:43:55.471845 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-03-16 00:43:55.471849 | orchestrator | Monday 16 March 2026 00:43:54 +0000 (0:00:00.177) 0:00:14.674 ********** 2026-03-16 00:43:55.471854 | orchestrator | skipping: [testbed-node-3] => 
(item={'data': 'osd-block-71e0430a-6bf1-53ec-905e-7c884e89f784', 'data_vg': 'ceph-71e0430a-6bf1-53ec-905e-7c884e89f784'})  2026-03-16 00:43:55.471862 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-40b418b1-0bd6-568c-82b5-8ddc4abd3365', 'data_vg': 'ceph-40b418b1-0bd6-568c-82b5-8ddc4abd3365'})  2026-03-16 00:43:55.471867 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:43:55.471871 | orchestrator | 2026-03-16 00:43:55.471875 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-16 00:43:55.471880 | orchestrator | Monday 16 March 2026 00:43:55 +0000 (0:00:00.193) 0:00:14.867 ********** 2026-03-16 00:43:55.471884 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71e0430a-6bf1-53ec-905e-7c884e89f784', 'data_vg': 'ceph-71e0430a-6bf1-53ec-905e-7c884e89f784'})  2026-03-16 00:43:55.471889 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-40b418b1-0bd6-568c-82b5-8ddc4abd3365', 'data_vg': 'ceph-40b418b1-0bd6-568c-82b5-8ddc4abd3365'})  2026-03-16 00:43:55.471896 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:43:55.471927 | orchestrator | 2026-03-16 00:43:55.471934 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-16 00:43:55.471941 | orchestrator | Monday 16 March 2026 00:43:55 +0000 (0:00:00.191) 0:00:15.059 ********** 2026-03-16 00:43:55.471952 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:43:55.471959 | orchestrator | 2026-03-16 00:43:55.471964 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-16 00:43:55.471973 | orchestrator | Monday 16 March 2026 00:43:55 +0000 (0:00:00.162) 0:00:15.222 ********** 2026-03-16 00:44:03.187462 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:44:03.187597 | orchestrator | 2026-03-16 00:44:03.187624 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2026-03-16 00:44:03.187646 | orchestrator | Monday 16 March 2026 00:43:55 +0000 (0:00:00.139) 0:00:15.362 ********** 2026-03-16 00:44:03.187664 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:44:03.187683 | orchestrator | 2026-03-16 00:44:03.187702 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-16 00:44:03.187720 | orchestrator | Monday 16 March 2026 00:43:55 +0000 (0:00:00.160) 0:00:15.522 ********** 2026-03-16 00:44:03.187738 | orchestrator | ok: [testbed-node-3] => { 2026-03-16 00:44:03.187758 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-16 00:44:03.187778 | orchestrator | } 2026-03-16 00:44:03.187798 | orchestrator | 2026-03-16 00:44:03.187816 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-16 00:44:03.187836 | orchestrator | Monday 16 March 2026 00:43:56 +0000 (0:00:00.335) 0:00:15.857 ********** 2026-03-16 00:44:03.187854 | orchestrator | ok: [testbed-node-3] => { 2026-03-16 00:44:03.187873 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-16 00:44:03.187952 | orchestrator | } 2026-03-16 00:44:03.187980 | orchestrator | 2026-03-16 00:44:03.187999 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-16 00:44:03.188018 | orchestrator | Monday 16 March 2026 00:43:56 +0000 (0:00:00.167) 0:00:16.024 ********** 2026-03-16 00:44:03.188037 | orchestrator | ok: [testbed-node-3] => { 2026-03-16 00:44:03.188057 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-16 00:44:03.188077 | orchestrator | } 2026-03-16 00:44:03.188096 | orchestrator | 2026-03-16 00:44:03.188123 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-16 00:44:03.188150 | orchestrator | Monday 16 March 2026 00:43:56 +0000 (0:00:00.165) 0:00:16.190 ********** 2026-03-16 00:44:03.188171 | orchestrator | ok: 
[testbed-node-3] 2026-03-16 00:44:03.188192 | orchestrator | 2026-03-16 00:44:03.188220 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-16 00:44:03.188252 | orchestrator | Monday 16 March 2026 00:43:57 +0000 (0:00:00.785) 0:00:16.975 ********** 2026-03-16 00:44:03.188275 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:44:03.188296 | orchestrator | 2026-03-16 00:44:03.188316 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-16 00:44:03.188337 | orchestrator | Monday 16 March 2026 00:43:57 +0000 (0:00:00.538) 0:00:17.513 ********** 2026-03-16 00:44:03.188357 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:44:03.188376 | orchestrator | 2026-03-16 00:44:03.188396 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-03-16 00:44:03.188416 | orchestrator | Monday 16 March 2026 00:43:58 +0000 (0:00:00.564) 0:00:18.078 ********** 2026-03-16 00:44:03.188437 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:44:03.188457 | orchestrator | 2026-03-16 00:44:03.188477 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-16 00:44:03.188498 | orchestrator | Monday 16 March 2026 00:43:58 +0000 (0:00:00.186) 0:00:18.265 ********** 2026-03-16 00:44:03.188517 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:44:03.188538 | orchestrator | 2026-03-16 00:44:03.188557 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-16 00:44:03.188577 | orchestrator | Monday 16 March 2026 00:43:58 +0000 (0:00:00.133) 0:00:18.398 ********** 2026-03-16 00:44:03.188598 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:44:03.188617 | orchestrator | 2026-03-16 00:44:03.188635 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-16 00:44:03.188695 | orchestrator | 
Monday 16 March 2026 00:43:58 +0000 (0:00:00.110) 0:00:18.509 ********** 2026-03-16 00:44:03.188742 | orchestrator | ok: [testbed-node-3] => { 2026-03-16 00:44:03.188763 | orchestrator |  "vgs_report": { 2026-03-16 00:44:03.188786 | orchestrator |  "vg": [] 2026-03-16 00:44:03.188812 | orchestrator |  } 2026-03-16 00:44:03.188831 | orchestrator | } 2026-03-16 00:44:03.188850 | orchestrator | 2026-03-16 00:44:03.188869 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-16 00:44:03.188915 | orchestrator | Monday 16 March 2026 00:43:58 +0000 (0:00:00.166) 0:00:18.675 ********** 2026-03-16 00:44:03.188936 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:44:03.188955 | orchestrator | 2026-03-16 00:44:03.188972 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-03-16 00:44:03.188989 | orchestrator | Monday 16 March 2026 00:43:59 +0000 (0:00:00.152) 0:00:18.827 ********** 2026-03-16 00:44:03.189006 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:44:03.189024 | orchestrator | 2026-03-16 00:44:03.189041 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-16 00:44:03.189057 | orchestrator | Monday 16 March 2026 00:43:59 +0000 (0:00:00.193) 0:00:19.021 ********** 2026-03-16 00:44:03.189075 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:44:03.189092 | orchestrator | 2026-03-16 00:44:03.189110 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-16 00:44:03.189128 | orchestrator | Monday 16 March 2026 00:43:59 +0000 (0:00:00.585) 0:00:19.607 ********** 2026-03-16 00:44:03.189147 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:44:03.189164 | orchestrator | 2026-03-16 00:44:03.189182 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-16 00:44:03.189202 | orchestrator | Monday 
16 March 2026 00:44:00 +0000 (0:00:00.184) 0:00:19.792 ********** 2026-03-16 00:44:03.189220 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:44:03.189239 | orchestrator | 2026-03-16 00:44:03.189258 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-16 00:44:03.189277 | orchestrator | Monday 16 March 2026 00:44:00 +0000 (0:00:00.185) 0:00:19.978 ********** 2026-03-16 00:44:03.189295 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:44:03.189313 | orchestrator | 2026-03-16 00:44:03.189331 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-16 00:44:03.189351 | orchestrator | Monday 16 March 2026 00:44:00 +0000 (0:00:00.175) 0:00:20.153 ********** 2026-03-16 00:44:03.189369 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:44:03.189387 | orchestrator | 2026-03-16 00:44:03.189400 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-16 00:44:03.189418 | orchestrator | Monday 16 March 2026 00:44:00 +0000 (0:00:00.208) 0:00:20.361 ********** 2026-03-16 00:44:03.189454 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:44:03.189465 | orchestrator | 2026-03-16 00:44:03.189477 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-16 00:44:03.189487 | orchestrator | Monday 16 March 2026 00:44:00 +0000 (0:00:00.185) 0:00:20.547 ********** 2026-03-16 00:44:03.189498 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:44:03.189509 | orchestrator | 2026-03-16 00:44:03.189520 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-16 00:44:03.189531 | orchestrator | Monday 16 March 2026 00:44:00 +0000 (0:00:00.169) 0:00:20.717 ********** 2026-03-16 00:44:03.189542 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:44:03.189553 | orchestrator | 2026-03-16 00:44:03.189572 | 
orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-16 00:44:03.189584 | orchestrator | Monday 16 March 2026 00:44:01 +0000 (0:00:00.177) 0:00:20.895 ********** 2026-03-16 00:44:03.189595 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:44:03.189605 | orchestrator | 2026-03-16 00:44:03.189616 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-16 00:44:03.189627 | orchestrator | Monday 16 March 2026 00:44:01 +0000 (0:00:00.159) 0:00:21.054 ********** 2026-03-16 00:44:03.189657 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:44:03.189671 | orchestrator | 2026-03-16 00:44:03.189682 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-16 00:44:03.189693 | orchestrator | Monday 16 March 2026 00:44:01 +0000 (0:00:00.188) 0:00:21.243 ********** 2026-03-16 00:44:03.189704 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:44:03.189715 | orchestrator | 2026-03-16 00:44:03.189726 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-16 00:44:03.189737 | orchestrator | Monday 16 March 2026 00:44:01 +0000 (0:00:00.150) 0:00:21.393 ********** 2026-03-16 00:44:03.189748 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:44:03.189759 | orchestrator | 2026-03-16 00:44:03.189770 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-16 00:44:03.189781 | orchestrator | Monday 16 March 2026 00:44:01 +0000 (0:00:00.139) 0:00:21.533 ********** 2026-03-16 00:44:03.189793 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71e0430a-6bf1-53ec-905e-7c884e89f784', 'data_vg': 'ceph-71e0430a-6bf1-53ec-905e-7c884e89f784'})  2026-03-16 00:44:03.189806 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-40b418b1-0bd6-568c-82b5-8ddc4abd3365', 'data_vg': 
'ceph-40b418b1-0bd6-568c-82b5-8ddc4abd3365'})  2026-03-16 00:44:03.189817 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:44:03.189827 | orchestrator | 2026-03-16 00:44:03.189839 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-16 00:44:03.189849 | orchestrator | Monday 16 March 2026 00:44:02 +0000 (0:00:00.467) 0:00:22.000 ********** 2026-03-16 00:44:03.189860 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71e0430a-6bf1-53ec-905e-7c884e89f784', 'data_vg': 'ceph-71e0430a-6bf1-53ec-905e-7c884e89f784'})  2026-03-16 00:44:03.189871 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-40b418b1-0bd6-568c-82b5-8ddc4abd3365', 'data_vg': 'ceph-40b418b1-0bd6-568c-82b5-8ddc4abd3365'})  2026-03-16 00:44:03.189885 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:44:03.189935 | orchestrator | 2026-03-16 00:44:03.189955 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-16 00:44:03.189973 | orchestrator | Monday 16 March 2026 00:44:02 +0000 (0:00:00.162) 0:00:22.163 ********** 2026-03-16 00:44:03.189992 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71e0430a-6bf1-53ec-905e-7c884e89f784', 'data_vg': 'ceph-71e0430a-6bf1-53ec-905e-7c884e89f784'})  2026-03-16 00:44:03.190004 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-40b418b1-0bd6-568c-82b5-8ddc4abd3365', 'data_vg': 'ceph-40b418b1-0bd6-568c-82b5-8ddc4abd3365'})  2026-03-16 00:44:03.190074 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:44:03.190089 | orchestrator | 2026-03-16 00:44:03.190100 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-16 00:44:03.190111 | orchestrator | Monday 16 March 2026 00:44:02 +0000 (0:00:00.240) 0:00:22.404 ********** 2026-03-16 00:44:03.190123 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-71e0430a-6bf1-53ec-905e-7c884e89f784', 'data_vg': 'ceph-71e0430a-6bf1-53ec-905e-7c884e89f784'})  2026-03-16 00:44:03.190134 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-40b418b1-0bd6-568c-82b5-8ddc4abd3365', 'data_vg': 'ceph-40b418b1-0bd6-568c-82b5-8ddc4abd3365'})  2026-03-16 00:44:03.190145 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:44:03.190165 | orchestrator | 2026-03-16 00:44:03.190177 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-16 00:44:03.190192 | orchestrator | Monday 16 March 2026 00:44:02 +0000 (0:00:00.187) 0:00:22.592 ********** 2026-03-16 00:44:03.190208 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71e0430a-6bf1-53ec-905e-7c884e89f784', 'data_vg': 'ceph-71e0430a-6bf1-53ec-905e-7c884e89f784'})  2026-03-16 00:44:03.190219 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-40b418b1-0bd6-568c-82b5-8ddc4abd3365', 'data_vg': 'ceph-40b418b1-0bd6-568c-82b5-8ddc4abd3365'})  2026-03-16 00:44:03.190239 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:44:03.190250 | orchestrator | 2026-03-16 00:44:03.190261 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-16 00:44:03.190272 | orchestrator | Monday 16 March 2026 00:44:03 +0000 (0:00:00.193) 0:00:22.785 ********** 2026-03-16 00:44:03.190294 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71e0430a-6bf1-53ec-905e-7c884e89f784', 'data_vg': 'ceph-71e0430a-6bf1-53ec-905e-7c884e89f784'})  2026-03-16 00:44:09.031546 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-40b418b1-0bd6-568c-82b5-8ddc4abd3365', 'data_vg': 'ceph-40b418b1-0bd6-568c-82b5-8ddc4abd3365'})  2026-03-16 00:44:09.031646 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:44:09.031660 | orchestrator | 2026-03-16 00:44:09.031670 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-03-16 00:44:09.031681 | orchestrator | Monday 16 March 2026 00:44:03 +0000 (0:00:00.158) 0:00:22.943 ********** 2026-03-16 00:44:09.031690 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71e0430a-6bf1-53ec-905e-7c884e89f784', 'data_vg': 'ceph-71e0430a-6bf1-53ec-905e-7c884e89f784'})  2026-03-16 00:44:09.031700 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-40b418b1-0bd6-568c-82b5-8ddc4abd3365', 'data_vg': 'ceph-40b418b1-0bd6-568c-82b5-8ddc4abd3365'})  2026-03-16 00:44:09.031709 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:44:09.031718 | orchestrator | 2026-03-16 00:44:09.031745 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-16 00:44:09.031754 | orchestrator | Monday 16 March 2026 00:44:03 +0000 (0:00:00.278) 0:00:23.222 ********** 2026-03-16 00:44:09.031763 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71e0430a-6bf1-53ec-905e-7c884e89f784', 'data_vg': 'ceph-71e0430a-6bf1-53ec-905e-7c884e89f784'})  2026-03-16 00:44:09.031773 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-40b418b1-0bd6-568c-82b5-8ddc4abd3365', 'data_vg': 'ceph-40b418b1-0bd6-568c-82b5-8ddc4abd3365'})  2026-03-16 00:44:09.031782 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:44:09.031791 | orchestrator | 2026-03-16 00:44:09.031800 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-16 00:44:09.031808 | orchestrator | Monday 16 March 2026 00:44:03 +0000 (0:00:00.195) 0:00:23.418 ********** 2026-03-16 00:44:09.031829 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:44:09.031840 | orchestrator | 2026-03-16 00:44:09.031849 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-16 00:44:09.031858 | orchestrator | Monday 16 March 2026 00:44:04 +0000 
(0:00:00.583) 0:00:24.001 ********** 2026-03-16 00:44:09.031866 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:44:09.031875 | orchestrator | 2026-03-16 00:44:09.031928 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-16 00:44:09.031937 | orchestrator | Monday 16 March 2026 00:44:04 +0000 (0:00:00.560) 0:00:24.562 ********** 2026-03-16 00:44:09.031946 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:44:09.031955 | orchestrator | 2026-03-16 00:44:09.031963 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-16 00:44:09.031972 | orchestrator | Monday 16 March 2026 00:44:04 +0000 (0:00:00.172) 0:00:24.734 ********** 2026-03-16 00:44:09.031981 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-40b418b1-0bd6-568c-82b5-8ddc4abd3365', 'vg_name': 'ceph-40b418b1-0bd6-568c-82b5-8ddc4abd3365'}) 2026-03-16 00:44:09.031995 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-71e0430a-6bf1-53ec-905e-7c884e89f784', 'vg_name': 'ceph-71e0430a-6bf1-53ec-905e-7c884e89f784'}) 2026-03-16 00:44:09.032004 | orchestrator | 2026-03-16 00:44:09.032013 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-16 00:44:09.032021 | orchestrator | Monday 16 March 2026 00:44:05 +0000 (0:00:00.198) 0:00:24.932 ********** 2026-03-16 00:44:09.032030 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71e0430a-6bf1-53ec-905e-7c884e89f784', 'data_vg': 'ceph-71e0430a-6bf1-53ec-905e-7c884e89f784'})  2026-03-16 00:44:09.032062 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-40b418b1-0bd6-568c-82b5-8ddc4abd3365', 'data_vg': 'ceph-40b418b1-0bd6-568c-82b5-8ddc4abd3365'})  2026-03-16 00:44:09.032071 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:44:09.032080 | orchestrator | 2026-03-16 00:44:09.032089 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-03-16 00:44:09.032097 | orchestrator | Monday 16 March 2026 00:44:05 +0000 (0:00:00.444) 0:00:25.377 ********** 2026-03-16 00:44:09.032106 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71e0430a-6bf1-53ec-905e-7c884e89f784', 'data_vg': 'ceph-71e0430a-6bf1-53ec-905e-7c884e89f784'})  2026-03-16 00:44:09.032115 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-40b418b1-0bd6-568c-82b5-8ddc4abd3365', 'data_vg': 'ceph-40b418b1-0bd6-568c-82b5-8ddc4abd3365'})  2026-03-16 00:44:09.032124 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:44:09.032132 | orchestrator | 2026-03-16 00:44:09.032142 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-16 00:44:09.032150 | orchestrator | Monday 16 March 2026 00:44:05 +0000 (0:00:00.183) 0:00:25.561 ********** 2026-03-16 00:44:09.032159 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71e0430a-6bf1-53ec-905e-7c884e89f784', 'data_vg': 'ceph-71e0430a-6bf1-53ec-905e-7c884e89f784'})  2026-03-16 00:44:09.032168 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-40b418b1-0bd6-568c-82b5-8ddc4abd3365', 'data_vg': 'ceph-40b418b1-0bd6-568c-82b5-8ddc4abd3365'})  2026-03-16 00:44:09.032177 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:44:09.032185 | orchestrator | 2026-03-16 00:44:09.032194 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-16 00:44:09.032203 | orchestrator | Monday 16 March 2026 00:44:05 +0000 (0:00:00.164) 0:00:25.725 ********** 2026-03-16 00:44:09.032227 | orchestrator | ok: [testbed-node-3] => { 2026-03-16 00:44:09.032237 | orchestrator |  "lvm_report": { 2026-03-16 00:44:09.032246 | orchestrator |  "lv": [ 2026-03-16 00:44:09.032255 | orchestrator |  { 2026-03-16 00:44:09.032264 | orchestrator |  "lv_name": 
"osd-block-40b418b1-0bd6-568c-82b5-8ddc4abd3365", 2026-03-16 00:44:09.032273 | orchestrator |  "vg_name": "ceph-40b418b1-0bd6-568c-82b5-8ddc4abd3365" 2026-03-16 00:44:09.032282 | orchestrator |  }, 2026-03-16 00:44:09.032290 | orchestrator |  { 2026-03-16 00:44:09.032299 | orchestrator |  "lv_name": "osd-block-71e0430a-6bf1-53ec-905e-7c884e89f784", 2026-03-16 00:44:09.032308 | orchestrator |  "vg_name": "ceph-71e0430a-6bf1-53ec-905e-7c884e89f784" 2026-03-16 00:44:09.032317 | orchestrator |  } 2026-03-16 00:44:09.032325 | orchestrator |  ], 2026-03-16 00:44:09.032334 | orchestrator |  "pv": [ 2026-03-16 00:44:09.032343 | orchestrator |  { 2026-03-16 00:44:09.032351 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-16 00:44:09.032360 | orchestrator |  "vg_name": "ceph-71e0430a-6bf1-53ec-905e-7c884e89f784" 2026-03-16 00:44:09.032369 | orchestrator |  }, 2026-03-16 00:44:09.032378 | orchestrator |  { 2026-03-16 00:44:09.032386 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-16 00:44:09.032395 | orchestrator |  "vg_name": "ceph-40b418b1-0bd6-568c-82b5-8ddc4abd3365" 2026-03-16 00:44:09.032404 | orchestrator |  } 2026-03-16 00:44:09.032412 | orchestrator |  ] 2026-03-16 00:44:09.032421 | orchestrator |  } 2026-03-16 00:44:09.032430 | orchestrator | } 2026-03-16 00:44:09.032439 | orchestrator | 2026-03-16 00:44:09.032448 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-16 00:44:09.032456 | orchestrator | 2026-03-16 00:44:09.032465 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-16 00:44:09.032474 | orchestrator | Monday 16 March 2026 00:44:06 +0000 (0:00:00.309) 0:00:26.035 ********** 2026-03-16 00:44:09.032490 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-16 00:44:09.032498 | orchestrator | 2026-03-16 00:44:09.032507 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-16 
00:44:09.032516 | orchestrator | Monday 16 March 2026 00:44:06 +0000 (0:00:00.278) 0:00:26.313 ********** 2026-03-16 00:44:09.032525 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:44:09.032533 | orchestrator | 2026-03-16 00:44:09.032542 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-16 00:44:09.032551 | orchestrator | Monday 16 March 2026 00:44:06 +0000 (0:00:00.234) 0:00:26.548 ********** 2026-03-16 00:44:09.032560 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-16 00:44:09.032569 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-16 00:44:09.032577 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-16 00:44:09.032586 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-16 00:44:09.032594 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-16 00:44:09.032603 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-16 00:44:09.032611 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-16 00:44:09.032625 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-16 00:44:09.032634 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-16 00:44:09.032642 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-16 00:44:09.032651 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-16 00:44:09.032659 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-16 00:44:09.032668 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-16 00:44:09.032677 | orchestrator | 2026-03-16 00:44:09.032686 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-16 00:44:09.032694 | orchestrator | Monday 16 March 2026 00:44:07 +0000 (0:00:00.482) 0:00:27.030 ********** 2026-03-16 00:44:09.032703 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:44:09.032712 | orchestrator | 2026-03-16 00:44:09.032720 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-16 00:44:09.032729 | orchestrator | Monday 16 March 2026 00:44:07 +0000 (0:00:00.207) 0:00:27.237 ********** 2026-03-16 00:44:09.032738 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:44:09.032746 | orchestrator | 2026-03-16 00:44:09.032755 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-16 00:44:09.032764 | orchestrator | Monday 16 March 2026 00:44:07 +0000 (0:00:00.201) 0:00:27.438 ********** 2026-03-16 00:44:09.032772 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:44:09.032781 | orchestrator | 2026-03-16 00:44:09.032790 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-16 00:44:09.032799 | orchestrator | Monday 16 March 2026 00:44:08 +0000 (0:00:00.712) 0:00:28.151 ********** 2026-03-16 00:44:09.032807 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:44:09.032816 | orchestrator | 2026-03-16 00:44:09.032825 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-16 00:44:09.032833 | orchestrator | Monday 16 March 2026 00:44:08 +0000 (0:00:00.224) 0:00:28.375 ********** 2026-03-16 00:44:09.032842 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:44:09.032851 | orchestrator | 2026-03-16 00:44:09.032859 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-03-16 00:44:09.032868 | orchestrator | Monday 16 March 2026 00:44:08 +0000 (0:00:00.211) 0:00:28.586 ********** 2026-03-16 00:44:09.032897 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:44:09.032906 | orchestrator | 2026-03-16 00:44:09.032921 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-16 00:44:20.638720 | orchestrator | Monday 16 March 2026 00:44:09 +0000 (0:00:00.199) 0:00:28.786 ********** 2026-03-16 00:44:20.638931 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:44:20.638966 | orchestrator | 2026-03-16 00:44:20.638987 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-16 00:44:20.639006 | orchestrator | Monday 16 March 2026 00:44:09 +0000 (0:00:00.204) 0:00:28.991 ********** 2026-03-16 00:44:20.639025 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:44:20.639045 | orchestrator | 2026-03-16 00:44:20.639063 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-16 00:44:20.639083 | orchestrator | Monday 16 March 2026 00:44:09 +0000 (0:00:00.202) 0:00:29.194 ********** 2026-03-16 00:44:20.639101 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5) 2026-03-16 00:44:20.639120 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5) 2026-03-16 00:44:20.639138 | orchestrator | 2026-03-16 00:44:20.639156 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-16 00:44:20.639175 | orchestrator | Monday 16 March 2026 00:44:09 +0000 (0:00:00.434) 0:00:29.629 ********** 2026-03-16 00:44:20.639194 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_dd732262-e9ae-4e48-8009-641fb05b3358) 2026-03-16 00:44:20.639214 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_dd732262-e9ae-4e48-8009-641fb05b3358) 2026-03-16 00:44:20.639233 | orchestrator | 2026-03-16 00:44:20.639252 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-16 00:44:20.639272 | orchestrator | Monday 16 March 2026 00:44:10 +0000 (0:00:00.446) 0:00:30.076 ********** 2026-03-16 00:44:20.639290 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1db695b4-2be8-41cf-b2f3-0a666ad94649) 2026-03-16 00:44:20.639309 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1db695b4-2be8-41cf-b2f3-0a666ad94649) 2026-03-16 00:44:20.639329 | orchestrator | 2026-03-16 00:44:20.639347 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-16 00:44:20.639366 | orchestrator | Monday 16 March 2026 00:44:10 +0000 (0:00:00.446) 0:00:30.522 ********** 2026-03-16 00:44:20.639380 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e5bc35b8-8936-4f39-b3b2-4c8e21a1af22) 2026-03-16 00:44:20.639391 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e5bc35b8-8936-4f39-b3b2-4c8e21a1af22) 2026-03-16 00:44:20.639402 | orchestrator | 2026-03-16 00:44:20.639413 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-16 00:44:20.639424 | orchestrator | Monday 16 March 2026 00:44:11 +0000 (0:00:00.727) 0:00:31.250 ********** 2026-03-16 00:44:20.639435 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-16 00:44:20.639446 | orchestrator | 2026-03-16 00:44:20.639457 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:44:20.639467 | orchestrator | Monday 16 March 2026 00:44:12 +0000 (0:00:00.648) 0:00:31.898 ********** 2026-03-16 00:44:20.639478 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-03-16 00:44:20.639489 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-16 00:44:20.639500 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-16 00:44:20.639511 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-16 00:44:20.639521 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-16 00:44:20.639532 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-16 00:44:20.639574 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-16 00:44:20.639585 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-16 00:44:20.639596 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-16 00:44:20.639607 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-16 00:44:20.639617 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-16 00:44:20.639628 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-16 00:44:20.639639 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-16 00:44:20.639649 | orchestrator | 2026-03-16 00:44:20.639660 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:44:20.639671 | orchestrator | Monday 16 March 2026 00:44:12 +0000 (0:00:00.855) 0:00:32.754 ********** 2026-03-16 00:44:20.639682 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:44:20.639692 | orchestrator | 2026-03-16 
00:44:20.639703 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:44:20.639734 | orchestrator | Monday 16 March 2026 00:44:13 +0000 (0:00:00.193) 0:00:32.947 ********** 2026-03-16 00:44:20.639745 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:44:20.639756 | orchestrator | 2026-03-16 00:44:20.639767 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:44:20.639777 | orchestrator | Monday 16 March 2026 00:44:13 +0000 (0:00:00.207) 0:00:33.155 ********** 2026-03-16 00:44:20.639788 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:44:20.639799 | orchestrator | 2026-03-16 00:44:20.639831 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:44:20.639843 | orchestrator | Monday 16 March 2026 00:44:13 +0000 (0:00:00.222) 0:00:33.378 ********** 2026-03-16 00:44:20.639854 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:44:20.639865 | orchestrator | 2026-03-16 00:44:20.639900 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:44:20.639911 | orchestrator | Monday 16 March 2026 00:44:13 +0000 (0:00:00.195) 0:00:33.574 ********** 2026-03-16 00:44:20.639922 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:44:20.639932 | orchestrator | 2026-03-16 00:44:20.639943 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:44:20.639954 | orchestrator | Monday 16 March 2026 00:44:14 +0000 (0:00:00.230) 0:00:33.804 ********** 2026-03-16 00:44:20.639964 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:44:20.639975 | orchestrator | 2026-03-16 00:44:20.639986 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:44:20.639997 | orchestrator | Monday 16 March 2026 00:44:14 +0000 (0:00:00.217) 
0:00:34.021 ********** 2026-03-16 00:44:20.640007 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:44:20.640018 | orchestrator | 2026-03-16 00:44:20.640029 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:44:20.640039 | orchestrator | Monday 16 March 2026 00:44:14 +0000 (0:00:00.199) 0:00:34.221 ********** 2026-03-16 00:44:20.640050 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:44:20.640061 | orchestrator | 2026-03-16 00:44:20.640072 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:44:20.640082 | orchestrator | Monday 16 March 2026 00:44:14 +0000 (0:00:00.200) 0:00:34.421 ********** 2026-03-16 00:44:20.640093 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-16 00:44:20.640104 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-16 00:44:20.640115 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-16 00:44:20.640126 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-16 00:44:20.640137 | orchestrator | 2026-03-16 00:44:20.640148 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:44:20.640168 | orchestrator | Monday 16 March 2026 00:44:15 +0000 (0:00:01.065) 0:00:35.487 ********** 2026-03-16 00:44:20.640179 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:44:20.640189 | orchestrator | 2026-03-16 00:44:20.640200 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:44:20.640211 | orchestrator | Monday 16 March 2026 00:44:15 +0000 (0:00:00.203) 0:00:35.691 ********** 2026-03-16 00:44:20.640222 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:44:20.640232 | orchestrator | 2026-03-16 00:44:20.640243 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:44:20.640254 | orchestrator | Monday 16 
March 2026 00:44:16 +0000 (0:00:00.559) 0:00:36.250 ********** 2026-03-16 00:44:20.640265 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:44:20.640275 | orchestrator | 2026-03-16 00:44:20.640286 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:44:20.640297 | orchestrator | Monday 16 March 2026 00:44:16 +0000 (0:00:00.197) 0:00:36.447 ********** 2026-03-16 00:44:20.640307 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:44:20.640318 | orchestrator | 2026-03-16 00:44:20.640329 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-16 00:44:20.640345 | orchestrator | Monday 16 March 2026 00:44:16 +0000 (0:00:00.177) 0:00:36.624 ********** 2026-03-16 00:44:20.640356 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:44:20.640366 | orchestrator | 2026-03-16 00:44:20.640377 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-16 00:44:20.640388 | orchestrator | Monday 16 March 2026 00:44:16 +0000 (0:00:00.117) 0:00:36.742 ********** 2026-03-16 00:44:20.640399 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ded6401a-969b-5c16-b1be-1b69fe43ded8'}}) 2026-03-16 00:44:20.640410 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '01ad088d-533b-5bd8-92eb-284afc0ad32d'}}) 2026-03-16 00:44:20.640421 | orchestrator | 2026-03-16 00:44:20.640432 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-16 00:44:20.640443 | orchestrator | Monday 16 March 2026 00:44:17 +0000 (0:00:00.170) 0:00:36.913 ********** 2026-03-16 00:44:20.640454 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ded6401a-969b-5c16-b1be-1b69fe43ded8', 'data_vg': 'ceph-ded6401a-969b-5c16-b1be-1b69fe43ded8'}) 2026-03-16 00:44:20.640466 | orchestrator | changed: [testbed-node-4] 
=> (item={'data': 'osd-block-01ad088d-533b-5bd8-92eb-284afc0ad32d', 'data_vg': 'ceph-01ad088d-533b-5bd8-92eb-284afc0ad32d'}) 2026-03-16 00:44:20.640476 | orchestrator | 2026-03-16 00:44:20.640487 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-16 00:44:20.640498 | orchestrator | Monday 16 March 2026 00:44:19 +0000 (0:00:01.938) 0:00:38.852 ********** 2026-03-16 00:44:20.640509 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ded6401a-969b-5c16-b1be-1b69fe43ded8', 'data_vg': 'ceph-ded6401a-969b-5c16-b1be-1b69fe43ded8'})  2026-03-16 00:44:20.640521 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-01ad088d-533b-5bd8-92eb-284afc0ad32d', 'data_vg': 'ceph-01ad088d-533b-5bd8-92eb-284afc0ad32d'})  2026-03-16 00:44:20.640532 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:44:20.640543 | orchestrator | 2026-03-16 00:44:20.640553 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-16 00:44:20.640564 | orchestrator | Monday 16 March 2026 00:44:19 +0000 (0:00:00.159) 0:00:39.012 ********** 2026-03-16 00:44:20.640575 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ded6401a-969b-5c16-b1be-1b69fe43ded8', 'data_vg': 'ceph-ded6401a-969b-5c16-b1be-1b69fe43ded8'}) 2026-03-16 00:44:20.640593 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-01ad088d-533b-5bd8-92eb-284afc0ad32d', 'data_vg': 'ceph-01ad088d-533b-5bd8-92eb-284afc0ad32d'}) 2026-03-16 00:44:26.695462 | orchestrator | 2026-03-16 00:44:26.695546 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-16 00:44:26.695569 | orchestrator | Monday 16 March 2026 00:44:20 +0000 (0:00:01.377) 0:00:40.390 ********** 2026-03-16 00:44:26.695574 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ded6401a-969b-5c16-b1be-1b69fe43ded8', 'data_vg': 
'ceph-ded6401a-969b-5c16-b1be-1b69fe43ded8'})
2026-03-16 00:44:26.695581 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-01ad088d-533b-5bd8-92eb-284afc0ad32d', 'data_vg': 'ceph-01ad088d-533b-5bd8-92eb-284afc0ad32d'})
2026-03-16 00:44:26.695586 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:44:26.695592 | orchestrator |
2026-03-16 00:44:26.695597 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-16 00:44:26.695601 | orchestrator | Monday 16 March 2026 00:44:20 +0000 (0:00:00.168) 0:00:40.558 **********
2026-03-16 00:44:26.695606 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:44:26.695611 | orchestrator |
2026-03-16 00:44:26.695616 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-16 00:44:26.695621 | orchestrator | Monday 16 March 2026 00:44:20 +0000 (0:00:00.140) 0:00:40.698 **********
2026-03-16 00:44:26.695625 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ded6401a-969b-5c16-b1be-1b69fe43ded8', 'data_vg': 'ceph-ded6401a-969b-5c16-b1be-1b69fe43ded8'})
2026-03-16 00:44:26.695630 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-01ad088d-533b-5bd8-92eb-284afc0ad32d', 'data_vg': 'ceph-01ad088d-533b-5bd8-92eb-284afc0ad32d'})
2026-03-16 00:44:26.695635 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:44:26.695639 | orchestrator |
2026-03-16 00:44:26.695644 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-16 00:44:26.695648 | orchestrator | Monday 16 March 2026 00:44:21 +0000 (0:00:00.152) 0:00:40.851 **********
2026-03-16 00:44:26.695653 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:44:26.695658 | orchestrator |
2026-03-16 00:44:26.695662 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-16 00:44:26.695667 | orchestrator | Monday 16 March 2026 00:44:21 +0000 (0:00:00.137) 0:00:40.989 **********
2026-03-16 00:44:26.695671 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ded6401a-969b-5c16-b1be-1b69fe43ded8', 'data_vg': 'ceph-ded6401a-969b-5c16-b1be-1b69fe43ded8'})
2026-03-16 00:44:26.695676 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-01ad088d-533b-5bd8-92eb-284afc0ad32d', 'data_vg': 'ceph-01ad088d-533b-5bd8-92eb-284afc0ad32d'})
2026-03-16 00:44:26.695680 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:44:26.695685 | orchestrator |
2026-03-16 00:44:26.695689 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-16 00:44:26.695704 | orchestrator | Monday 16 March 2026 00:44:21 +0000 (0:00:00.529) 0:00:41.518 **********
2026-03-16 00:44:26.695709 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:44:26.695713 | orchestrator |
2026-03-16 00:44:26.695718 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-16 00:44:26.695722 | orchestrator | Monday 16 March 2026 00:44:21 +0000 (0:00:00.138) 0:00:41.656 **********
2026-03-16 00:44:26.695727 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ded6401a-969b-5c16-b1be-1b69fe43ded8', 'data_vg': 'ceph-ded6401a-969b-5c16-b1be-1b69fe43ded8'})
2026-03-16 00:44:26.695732 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-01ad088d-533b-5bd8-92eb-284afc0ad32d', 'data_vg': 'ceph-01ad088d-533b-5bd8-92eb-284afc0ad32d'})
2026-03-16 00:44:26.695736 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:44:26.695741 | orchestrator |
2026-03-16 00:44:26.695745 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-16 00:44:26.695750 | orchestrator | Monday 16 March 2026 00:44:22 +0000 (0:00:00.210) 0:00:41.867 **********
2026-03-16 00:44:26.695754 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:44:26.695759 | orchestrator |
2026-03-16 00:44:26.695764 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-16 00:44:26.695775 | orchestrator | Monday 16 March 2026 00:44:22 +0000 (0:00:00.177) 0:00:42.044 **********
2026-03-16 00:44:26.695780 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ded6401a-969b-5c16-b1be-1b69fe43ded8', 'data_vg': 'ceph-ded6401a-969b-5c16-b1be-1b69fe43ded8'})
2026-03-16 00:44:26.695784 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-01ad088d-533b-5bd8-92eb-284afc0ad32d', 'data_vg': 'ceph-01ad088d-533b-5bd8-92eb-284afc0ad32d'})
2026-03-16 00:44:26.695789 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:44:26.695793 | orchestrator |
2026-03-16 00:44:26.695798 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-16 00:44:26.695802 | orchestrator | Monday 16 March 2026 00:44:22 +0000 (0:00:00.155) 0:00:42.200 **********
2026-03-16 00:44:26.695807 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ded6401a-969b-5c16-b1be-1b69fe43ded8', 'data_vg': 'ceph-ded6401a-969b-5c16-b1be-1b69fe43ded8'})
2026-03-16 00:44:26.695811 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-01ad088d-533b-5bd8-92eb-284afc0ad32d', 'data_vg': 'ceph-01ad088d-533b-5bd8-92eb-284afc0ad32d'})
2026-03-16 00:44:26.695816 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:44:26.695820 | orchestrator |
2026-03-16 00:44:26.695825 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-16 00:44:26.695840 | orchestrator | Monday 16 March 2026 00:44:22 +0000 (0:00:00.150) 0:00:42.350 **********
2026-03-16 00:44:26.695846 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ded6401a-969b-5c16-b1be-1b69fe43ded8', 'data_vg': 'ceph-ded6401a-969b-5c16-b1be-1b69fe43ded8'})
2026-03-16 00:44:26.695850 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-01ad088d-533b-5bd8-92eb-284afc0ad32d', 'data_vg': 'ceph-01ad088d-533b-5bd8-92eb-284afc0ad32d'})
2026-03-16 00:44:26.695855 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:44:26.695934 | orchestrator |
2026-03-16 00:44:26.695949 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-16 00:44:26.695956 | orchestrator | Monday 16 March 2026 00:44:22 +0000 (0:00:00.167) 0:00:42.518 **********
2026-03-16 00:44:26.695963 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:44:26.695970 | orchestrator |
2026-03-16 00:44:26.695977 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-16 00:44:26.695985 | orchestrator | Monday 16 March 2026 00:44:22 +0000 (0:00:00.184) 0:00:42.702 **********
2026-03-16 00:44:26.695992 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:44:26.696000 | orchestrator |
2026-03-16 00:44:26.696007 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-16 00:44:26.696014 | orchestrator | Monday 16 March 2026 00:44:23 +0000 (0:00:00.148) 0:00:42.851 **********
2026-03-16 00:44:26.696021 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:44:26.696028 | orchestrator |
2026-03-16 00:44:26.696033 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-16 00:44:26.696037 | orchestrator | Monday 16 March 2026 00:44:23 +0000 (0:00:00.138) 0:00:42.989 **********
2026-03-16 00:44:26.696042 | orchestrator | ok: [testbed-node-4] => {
2026-03-16 00:44:26.696047 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-03-16 00:44:26.696051 | orchestrator | }
2026-03-16 00:44:26.696056 | orchestrator |
2026-03-16 00:44:26.696061 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-16 00:44:26.696065 | orchestrator | Monday 16 March 2026 00:44:23 +0000 (0:00:00.146) 0:00:43.136 **********
2026-03-16 00:44:26.696070 | orchestrator | ok: [testbed-node-4] => {
2026-03-16 00:44:26.696074 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-03-16 00:44:26.696079 | orchestrator | }
2026-03-16 00:44:26.696083 | orchestrator |
2026-03-16 00:44:26.696088 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-16 00:44:26.696092 | orchestrator | Monday 16 March 2026 00:44:23 +0000 (0:00:00.157) 0:00:43.294 **********
2026-03-16 00:44:26.696102 | orchestrator | ok: [testbed-node-4] => {
2026-03-16 00:44:26.696107 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-03-16 00:44:26.696111 | orchestrator | }
2026-03-16 00:44:26.696116 | orchestrator |
2026-03-16 00:44:26.696120 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-16 00:44:26.696125 | orchestrator | Monday 16 March 2026 00:44:23 +0000 (0:00:00.422) 0:00:43.716 **********
2026-03-16 00:44:26.696129 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:44:26.696134 | orchestrator |
2026-03-16 00:44:26.696138 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-16 00:44:26.696143 | orchestrator | Monday 16 March 2026 00:44:24 +0000 (0:00:00.516) 0:00:44.233 **********
2026-03-16 00:44:26.696147 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:44:26.696154 | orchestrator |
2026-03-16 00:44:26.696161 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-16 00:44:26.696172 | orchestrator | Monday 16 March 2026 00:44:25 +0000 (0:00:00.496) 0:00:44.880 **********
2026-03-16 00:44:26.696181 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:44:26.696189 | orchestrator |
2026-03-16 00:44:26.696196 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-16 00:44:26.696203 | orchestrator | Monday 16 March 2026 00:44:25 +0000 (0:00:00.146) 0:00:45.376 **********
2026-03-16 00:44:26.696210 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:44:26.696216 | orchestrator |
2026-03-16 00:44:26.696222 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-16 00:44:26.696230 | orchestrator | Monday 16 March 2026 00:44:25 +0000 (0:00:00.146) 0:00:45.523 **********
2026-03-16 00:44:26.696237 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:44:26.696244 | orchestrator |
2026-03-16 00:44:26.696252 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-16 00:44:26.696260 | orchestrator | Monday 16 March 2026 00:44:25 +0000 (0:00:00.113) 0:00:45.636 **********
2026-03-16 00:44:26.696267 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:44:26.696274 | orchestrator |
2026-03-16 00:44:26.696281 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-16 00:44:26.696288 | orchestrator | Monday 16 March 2026 00:44:26 +0000 (0:00:00.124) 0:00:45.760 **********
2026-03-16 00:44:26.696295 | orchestrator | ok: [testbed-node-4] => {
2026-03-16 00:44:26.696299 | orchestrator |  "vgs_report": {
2026-03-16 00:44:26.696304 | orchestrator |  "vg": []
2026-03-16 00:44:26.696309 | orchestrator |  }
2026-03-16 00:44:26.696313 | orchestrator | }
2026-03-16 00:44:26.696318 | orchestrator |
2026-03-16 00:44:26.696322 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-16 00:44:26.696327 | orchestrator | Monday 16 March 2026 00:44:26 +0000 (0:00:00.166) 0:00:45.927 **********
2026-03-16 00:44:26.696331 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:44:26.696336 | orchestrator |
2026-03-16 00:44:26.696340 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-16 00:44:26.696345 | orchestrator | Monday 16 March 2026 00:44:26 +0000 (0:00:00.129) 0:00:46.056 **********
2026-03-16 00:44:26.696349 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:44:26.696354 | orchestrator |
2026-03-16 00:44:26.696361 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-16 00:44:26.696370 | orchestrator | Monday 16 March 2026 00:44:26 +0000 (0:00:00.126) 0:00:46.183 **********
2026-03-16 00:44:26.696380 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:44:26.696387 | orchestrator |
2026-03-16 00:44:26.696394 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-16 00:44:26.696410 | orchestrator | Monday 16 March 2026 00:44:26 +0000 (0:00:00.136) 0:00:46.319 **********
2026-03-16 00:44:26.696416 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:44:26.696423 | orchestrator |
2026-03-16 00:44:26.696436 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-16 00:44:31.142910 | orchestrator | Monday 16 March 2026 00:44:26 +0000 (0:00:00.132) 0:00:46.451 **********
2026-03-16 00:44:31.143009 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:44:31.143018 | orchestrator |
2026-03-16 00:44:31.143025 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-16 00:44:31.143031 | orchestrator | Monday 16 March 2026 00:44:26 +0000 (0:00:00.301) 0:00:46.753 **********
2026-03-16 00:44:31.143037 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:44:31.143043 | orchestrator |
2026-03-16 00:44:31.143049 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-16 00:44:31.143055 | orchestrator | Monday 16 March 2026 00:44:27 +0000 (0:00:00.122) 0:00:46.876 **********
2026-03-16 00:44:31.143061 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:44:31.143066 | orchestrator |
2026-03-16 00:44:31.143072 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-16 00:44:31.143078 | orchestrator | Monday 16 March 2026 00:44:27 +0000 (0:00:00.140) 0:00:47.016 **********
2026-03-16 00:44:31.143084 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:44:31.143089 | orchestrator |
2026-03-16 00:44:31.143095 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-16 00:44:31.143101 | orchestrator | Monday 16 March 2026 00:44:27 +0000 (0:00:00.126) 0:00:47.143 **********
2026-03-16 00:44:31.143107 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:44:31.143112 | orchestrator |
2026-03-16 00:44:31.143118 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-16 00:44:31.143124 | orchestrator | Monday 16 March 2026 00:44:27 +0000 (0:00:00.132) 0:00:47.276 **********
2026-03-16 00:44:31.143129 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:44:31.143135 | orchestrator |
2026-03-16 00:44:31.143141 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-16 00:44:31.143147 | orchestrator | Monday 16 March 2026 00:44:27 +0000 (0:00:00.135) 0:00:47.412 **********
2026-03-16 00:44:31.143152 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:44:31.143158 | orchestrator |
2026-03-16 00:44:31.143164 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-16 00:44:31.143169 | orchestrator | Monday 16 March 2026 00:44:27 +0000 (0:00:00.137) 0:00:47.549 **********
2026-03-16 00:44:31.143175 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:44:31.143181 | orchestrator |
2026-03-16 00:44:31.143187 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-16 00:44:31.143192 | orchestrator | Monday 16 March 2026 00:44:27 +0000 (0:00:00.127) 0:00:47.677 **********
2026-03-16 00:44:31.143198 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:44:31.143204 | orchestrator |
2026-03-16 00:44:31.143210 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-16 00:44:31.143215 | orchestrator | Monday 16 March 2026 00:44:28 +0000 (0:00:00.129) 0:00:47.806 **********
2026-03-16 00:44:31.143221 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:44:31.143227 | orchestrator |
2026-03-16 00:44:31.143233 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-16 00:44:31.143249 | orchestrator | Monday 16 March 2026 00:44:28 +0000 (0:00:00.126) 0:00:47.933 **********
2026-03-16 00:44:31.143257 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ded6401a-969b-5c16-b1be-1b69fe43ded8', 'data_vg': 'ceph-ded6401a-969b-5c16-b1be-1b69fe43ded8'})
2026-03-16 00:44:31.143263 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-01ad088d-533b-5bd8-92eb-284afc0ad32d', 'data_vg': 'ceph-01ad088d-533b-5bd8-92eb-284afc0ad32d'})
2026-03-16 00:44:31.143269 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:44:31.143275 | orchestrator |
2026-03-16 00:44:31.143281 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-16 00:44:31.143287 | orchestrator | Monday 16 March 2026 00:44:28 +0000 (0:00:00.144) 0:00:48.077 **********
2026-03-16 00:44:31.143292 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ded6401a-969b-5c16-b1be-1b69fe43ded8', 'data_vg': 'ceph-ded6401a-969b-5c16-b1be-1b69fe43ded8'})
2026-03-16 00:44:31.143304 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-01ad088d-533b-5bd8-92eb-284afc0ad32d', 'data_vg': 'ceph-01ad088d-533b-5bd8-92eb-284afc0ad32d'})
2026-03-16 00:44:31.143310 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:44:31.143316 | orchestrator |
2026-03-16 00:44:31.143322 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-16 00:44:31.143327 | orchestrator | Monday 16 March 2026 00:44:28 +0000 (0:00:00.149) 0:00:48.227 **********
2026-03-16 00:44:31.143333 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ded6401a-969b-5c16-b1be-1b69fe43ded8', 'data_vg': 'ceph-ded6401a-969b-5c16-b1be-1b69fe43ded8'})
2026-03-16 00:44:31.143339 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-01ad088d-533b-5bd8-92eb-284afc0ad32d', 'data_vg': 'ceph-01ad088d-533b-5bd8-92eb-284afc0ad32d'})
2026-03-16 00:44:31.143345 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:44:31.143350 | orchestrator |
2026-03-16 00:44:31.143356 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-16 00:44:31.143362 | orchestrator | Monday 16 March 2026 00:44:28 +0000 (0:00:00.296) 0:00:48.524 **********
2026-03-16 00:44:31.143368 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ded6401a-969b-5c16-b1be-1b69fe43ded8', 'data_vg': 'ceph-ded6401a-969b-5c16-b1be-1b69fe43ded8'})
2026-03-16 00:44:31.143373 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-01ad088d-533b-5bd8-92eb-284afc0ad32d', 'data_vg': 'ceph-01ad088d-533b-5bd8-92eb-284afc0ad32d'})
2026-03-16 00:44:31.143379 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:44:31.143385 | orchestrator |
2026-03-16 00:44:31.143403 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-03-16 00:44:31.143409 | orchestrator | Monday 16 March 2026 00:44:28 +0000 (0:00:00.149) 0:00:48.673 **********
2026-03-16 00:44:31.143415 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ded6401a-969b-5c16-b1be-1b69fe43ded8', 'data_vg': 'ceph-ded6401a-969b-5c16-b1be-1b69fe43ded8'})
2026-03-16 00:44:31.143421 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-01ad088d-533b-5bd8-92eb-284afc0ad32d', 'data_vg': 'ceph-01ad088d-533b-5bd8-92eb-284afc0ad32d'})
2026-03-16 00:44:31.143426 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:44:31.143432 | orchestrator |
2026-03-16 00:44:31.143438 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-03-16 00:44:31.143454 | orchestrator | Monday 16 March 2026 00:44:29 +0000 (0:00:00.153) 0:00:48.827 **********
2026-03-16 00:44:31.143460 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ded6401a-969b-5c16-b1be-1b69fe43ded8', 'data_vg': 'ceph-ded6401a-969b-5c16-b1be-1b69fe43ded8'})
2026-03-16 00:44:31.143467 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-01ad088d-533b-5bd8-92eb-284afc0ad32d', 'data_vg': 'ceph-01ad088d-533b-5bd8-92eb-284afc0ad32d'})
2026-03-16 00:44:31.143479 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:44:31.143485 | orchestrator |
2026-03-16 00:44:31.143491 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-03-16 00:44:31.143496 | orchestrator | Monday 16 March 2026 00:44:29 +0000 (0:00:00.145) 0:00:48.973 **********
2026-03-16 00:44:31.143502 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ded6401a-969b-5c16-b1be-1b69fe43ded8', 'data_vg': 'ceph-ded6401a-969b-5c16-b1be-1b69fe43ded8'})
2026-03-16 00:44:31.143508 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-01ad088d-533b-5bd8-92eb-284afc0ad32d', 'data_vg': 'ceph-01ad088d-533b-5bd8-92eb-284afc0ad32d'})
2026-03-16 00:44:31.143513 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:44:31.143519 | orchestrator |
2026-03-16 00:44:31.143525 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-03-16 00:44:31.143530 | orchestrator | Monday 16 March 2026 00:44:29 +0000 (0:00:00.155) 0:00:49.128 **********
2026-03-16 00:44:31.143536 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ded6401a-969b-5c16-b1be-1b69fe43ded8', 'data_vg': 'ceph-ded6401a-969b-5c16-b1be-1b69fe43ded8'})
2026-03-16 00:44:31.143546 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-01ad088d-533b-5bd8-92eb-284afc0ad32d', 'data_vg': 'ceph-01ad088d-533b-5bd8-92eb-284afc0ad32d'})
2026-03-16 00:44:31.143555 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:44:31.143561 | orchestrator |
2026-03-16 00:44:31.143567 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-03-16 00:44:31.143573 | orchestrator | Monday 16 March 2026 00:44:29 +0000 (0:00:00.141) 0:00:49.270 **********
2026-03-16 00:44:31.143578 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:44:31.143584 | orchestrator |
2026-03-16 00:44:31.143590 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-03-16 00:44:31.143596 | orchestrator | Monday 16 March 2026 00:44:30 +0000 (0:00:00.517) 0:00:49.788 **********
2026-03-16 00:44:31.143601 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:44:31.143607 | orchestrator |
2026-03-16 00:44:31.143613 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-03-16 00:44:31.143618 | orchestrator | Monday 16 March 2026 00:44:30 +0000 (0:00:00.524) 0:00:50.312 **********
2026-03-16 00:44:31.143624 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:44:31.143629 | orchestrator |
2026-03-16 00:44:31.143635 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-03-16 00:44:31.143641 | orchestrator | Monday 16 March 2026 00:44:30 +0000 (0:00:00.142) 0:00:50.455 **********
2026-03-16 00:44:31.143647 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-01ad088d-533b-5bd8-92eb-284afc0ad32d', 'vg_name': 'ceph-01ad088d-533b-5bd8-92eb-284afc0ad32d'})
2026-03-16 00:44:31.143653 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-ded6401a-969b-5c16-b1be-1b69fe43ded8', 'vg_name': 'ceph-ded6401a-969b-5c16-b1be-1b69fe43ded8'})
2026-03-16 00:44:31.143658 | orchestrator |
2026-03-16 00:44:31.143664 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-03-16 00:44:31.143670 | orchestrator | Monday 16 March 2026 00:44:30 +0000 (0:00:00.136) 0:00:50.591 **********
2026-03-16 00:44:31.143676 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ded6401a-969b-5c16-b1be-1b69fe43ded8', 'data_vg': 'ceph-ded6401a-969b-5c16-b1be-1b69fe43ded8'})
2026-03-16 00:44:31.143681 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-01ad088d-533b-5bd8-92eb-284afc0ad32d', 'data_vg': 'ceph-01ad088d-533b-5bd8-92eb-284afc0ad32d'})
2026-03-16 00:44:31.143687 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:44:31.143693 | orchestrator |
2026-03-16 00:44:31.143698 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-03-16 00:44:31.143704 | orchestrator | Monday 16 March 2026 00:44:30 +0000 (0:00:00.167) 0:00:50.758 **********
2026-03-16 00:44:31.143710 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ded6401a-969b-5c16-b1be-1b69fe43ded8', 'data_vg': 'ceph-ded6401a-969b-5c16-b1be-1b69fe43ded8'})
2026-03-16 00:44:31.143719 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-01ad088d-533b-5bd8-92eb-284afc0ad32d', 'data_vg': 'ceph-01ad088d-533b-5bd8-92eb-284afc0ad32d'})
2026-03-16 00:44:36.879620 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:44:36.879699 | orchestrator |
2026-03-16 00:44:36.879706 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-03-16 00:44:36.879712 | orchestrator | Monday 16 March 2026 00:44:31 +0000 (0:00:00.139) 0:00:50.898 **********
2026-03-16 00:44:36.879717 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ded6401a-969b-5c16-b1be-1b69fe43ded8', 'data_vg': 'ceph-ded6401a-969b-5c16-b1be-1b69fe43ded8'})
2026-03-16 00:44:36.879724 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-01ad088d-533b-5bd8-92eb-284afc0ad32d', 'data_vg': 'ceph-01ad088d-533b-5bd8-92eb-284afc0ad32d'})
2026-03-16 00:44:36.879728 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:44:36.879732 | orchestrator |
2026-03-16 00:44:36.879736 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-03-16 00:44:36.879754 | orchestrator | Monday 16 March 2026 00:44:31 +0000 (0:00:00.168) 0:00:51.066 **********
2026-03-16 00:44:36.879758 | orchestrator | ok: [testbed-node-4] => {
2026-03-16 00:44:36.879763 | orchestrator |  "lvm_report": {
2026-03-16 00:44:36.879767 | orchestrator |  "lv": [
2026-03-16 00:44:36.879771 | orchestrator |  {
2026-03-16 00:44:36.879776 | orchestrator |  "lv_name": "osd-block-01ad088d-533b-5bd8-92eb-284afc0ad32d",
2026-03-16 00:44:36.879792 | orchestrator |  "vg_name": "ceph-01ad088d-533b-5bd8-92eb-284afc0ad32d"
2026-03-16 00:44:36.879796 | orchestrator |  },
2026-03-16 00:44:36.879805 | orchestrator |  {
2026-03-16 00:44:36.879809 | orchestrator |  "lv_name": "osd-block-ded6401a-969b-5c16-b1be-1b69fe43ded8",
2026-03-16 00:44:36.879813 | orchestrator |  "vg_name": "ceph-ded6401a-969b-5c16-b1be-1b69fe43ded8"
2026-03-16 00:44:36.879817 | orchestrator |  }
2026-03-16 00:44:36.879821 | orchestrator |  ],
2026-03-16 00:44:36.879825 | orchestrator |  "pv": [
2026-03-16 00:44:36.879829 | orchestrator |  {
2026-03-16 00:44:36.879833 | orchestrator |  "pv_name": "/dev/sdb",
2026-03-16 00:44:36.879837 | orchestrator |  "vg_name": "ceph-ded6401a-969b-5c16-b1be-1b69fe43ded8"
2026-03-16 00:44:36.879841 | orchestrator |  },
2026-03-16 00:44:36.879844 | orchestrator |  {
2026-03-16 00:44:36.879909 | orchestrator |  "pv_name": "/dev/sdc",
2026-03-16 00:44:36.879915 | orchestrator |  "vg_name": "ceph-01ad088d-533b-5bd8-92eb-284afc0ad32d"
2026-03-16 00:44:36.879919 | orchestrator |  }
2026-03-16 00:44:36.879924 | orchestrator |  ]
2026-03-16 00:44:36.879928 | orchestrator |  }
2026-03-16 00:44:36.879932 | orchestrator | }
2026-03-16 00:44:36.879937 | orchestrator |
2026-03-16 00:44:36.879941 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-16 00:44:36.879945 | orchestrator |
2026-03-16 00:44:36.879949 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-16 00:44:36.879953 | orchestrator | Monday 16 March 2026 00:44:31 +0000 (0:00:00.447) 0:00:51.514 **********
2026-03-16 00:44:36.879958 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-16 00:44:36.879962 | orchestrator |
2026-03-16 00:44:36.879966 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-16 00:44:36.879971 | orchestrator | Monday 16 March 2026 00:44:31 +0000 (0:00:00.225) 0:00:51.740 **********
2026-03-16 00:44:36.879975 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:44:36.879979 | orchestrator |
2026-03-16 00:44:36.879983 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:44:36.879987 | orchestrator | Monday 16 March 2026 00:44:32 +0000 (0:00:00.218) 0:00:51.959 **********
2026-03-16 00:44:36.879991 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-03-16 00:44:36.879996 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-03-16 00:44:36.880000 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-03-16 00:44:36.880003 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-03-16 00:44:36.880007 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-03-16 00:44:36.880011 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-03-16 00:44:36.880015 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-03-16 00:44:36.880019 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-03-16 00:44:36.880023 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-03-16 00:44:36.880027 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-03-16 00:44:36.880036 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-03-16 00:44:36.880040 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-03-16 00:44:36.880044 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-03-16 00:44:36.880047 | orchestrator |
2026-03-16 00:44:36.880052 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:44:36.880058 | orchestrator | Monday 16 March 2026 00:44:32 +0000 (0:00:00.392) 0:00:52.351 **********
2026-03-16 00:44:36.880062 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:36.880066 | orchestrator |
2026-03-16 00:44:36.880070 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:44:36.880074 | orchestrator | Monday 16 March 2026 00:44:32 +0000 (0:00:00.186) 0:00:52.538 **********
2026-03-16 00:44:36.880079 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:36.880083 | orchestrator |
2026-03-16 00:44:36.880087 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:44:36.880105 | orchestrator | Monday 16 March 2026 00:44:32 +0000 (0:00:00.181) 0:00:52.719 **********
2026-03-16 00:44:36.880109 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:36.880113 | orchestrator |
2026-03-16 00:44:36.880117 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:44:36.880121 | orchestrator | Monday 16 March 2026 00:44:33 +0000 (0:00:00.183) 0:00:52.903 **********
2026-03-16 00:44:36.880125 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:36.880129 | orchestrator |
2026-03-16 00:44:36.880133 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:44:36.880137 | orchestrator | Monday 16 March 2026 00:44:33 +0000 (0:00:00.184) 0:00:53.087 **********
2026-03-16 00:44:36.880141 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:36.880145 | orchestrator |
2026-03-16 00:44:36.880149 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:44:36.880153 | orchestrator | Monday 16 March 2026 00:44:33 +0000 (0:00:00.564) 0:00:53.652 **********
2026-03-16 00:44:36.880157 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:36.880161 | orchestrator |
2026-03-16 00:44:36.880166 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:44:36.880170 | orchestrator | Monday 16 March 2026 00:44:34 +0000 (0:00:00.185) 0:00:53.838 **********
2026-03-16 00:44:36.880175 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:36.880179 | orchestrator |
2026-03-16 00:44:36.880184 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:44:36.880189 | orchestrator | Monday 16 March 2026 00:44:34 +0000 (0:00:00.189) 0:00:54.027 **********
2026-03-16 00:44:36.880193 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:36.880198 | orchestrator |
2026-03-16 00:44:36.880202 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:44:36.880207 | orchestrator | Monday 16 March 2026 00:44:34 +0000 (0:00:00.187) 0:00:54.214 **********
2026-03-16 00:44:36.880212 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055)
2026-03-16 00:44:36.880218 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055)
2026-03-16 00:44:36.880222 | orchestrator |
2026-03-16 00:44:36.880227 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:44:36.880231 | orchestrator | Monday 16 March 2026 00:44:34 +0000 (0:00:00.394) 0:00:54.608 **********
2026-03-16 00:44:36.880267 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_da655a5c-29e3-4c18-87b3-c0b6111b4096)
2026-03-16 00:44:36.880273 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_da655a5c-29e3-4c18-87b3-c0b6111b4096)
2026-03-16 00:44:36.880277 | orchestrator |
2026-03-16 00:44:36.880282 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:44:36.880292 | orchestrator | Monday 16 March 2026 00:44:35 +0000 (0:00:00.397) 0:00:55.006 **********
2026-03-16 00:44:36.880297 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_75257afc-ff3d-423c-9b8c-9aa6b4de753a)
2026-03-16 00:44:36.880301 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_75257afc-ff3d-423c-9b8c-9aa6b4de753a)
2026-03-16 00:44:36.880305 | orchestrator |
2026-03-16 00:44:36.880310 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:44:36.880314 | orchestrator | Monday 16 March 2026 00:44:35 +0000 (0:00:00.411) 0:00:55.417 **********
2026-03-16 00:44:36.880319 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_573bd76d-2068-40ae-bffe-bd7cc0e0b9d7)
2026-03-16 00:44:36.880323 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_573bd76d-2068-40ae-bffe-bd7cc0e0b9d7)
2026-03-16 00:44:36.880328 | orchestrator |
2026-03-16 00:44:36.880332 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-16 00:44:36.880337 | orchestrator | Monday 16 March 2026 00:44:36 +0000 (0:00:00.411) 0:00:55.829 **********
2026-03-16 00:44:36.880341 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-16 00:44:36.880345 | orchestrator |
2026-03-16 00:44:36.880350 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-16 00:44:36.880354 | orchestrator | Monday 16 March 2026 00:44:36 +0000 (0:00:00.408) 0:00:56.238 **********
2026-03-16 00:44:36.880359 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-03-16 00:44:36.880363 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-03-16 00:44:36.880367 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-03-16 00:44:36.880372 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-03-16 00:44:36.880376 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-03-16 00:44:36.880380 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-03-16 00:44:36.880385 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-03-16 00:44:36.880389 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-03-16 00:44:36.880394 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-03-16 00:44:36.880398 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-03-16 00:44:36.880402 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-03-16 00:44:36.880411 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-03-16 00:44:45.508600 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-03-16 00:44:45.508707 | orchestrator |
2026-03-16 00:44:45.508724 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-16 00:44:45.508736 | orchestrator | Monday 16 March 2026 00:44:36 +0000 (0:00:00.389) 0:00:56.628 **********
2026-03-16 00:44:45.508748 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:45.508760 | orchestrator |
2026-03-16 00:44:45.508771 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-16 00:44:45.508782 | orchestrator | Monday 16 March 2026 00:44:37 +0000 (0:00:00.188) 0:00:56.816 **********
2026-03-16 00:44:45.508793 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:45.508804 | orchestrator |
2026-03-16 00:44:45.508815 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-16 00:44:45.508826 | orchestrator | Monday 16 March 2026 00:44:37 +0000 (0:00:00.592) 0:00:57.409 **********
2026-03-16 00:44:45.508892 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:45.508929 | orchestrator |
2026-03-16 00:44:45.508941 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-16 00:44:45.508952 |
orchestrator | Monday 16 March 2026 00:44:37 +0000 (0:00:00.193) 0:00:57.603 ********** 2026-03-16 00:44:45.508963 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:44:45.508973 | orchestrator | 2026-03-16 00:44:45.508984 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:44:45.508995 | orchestrator | Monday 16 March 2026 00:44:38 +0000 (0:00:00.188) 0:00:57.791 ********** 2026-03-16 00:44:45.509006 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:44:45.509017 | orchestrator | 2026-03-16 00:44:45.509028 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:44:45.509039 | orchestrator | Monday 16 March 2026 00:44:38 +0000 (0:00:00.196) 0:00:57.987 ********** 2026-03-16 00:44:45.509050 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:44:45.509061 | orchestrator | 2026-03-16 00:44:45.509071 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:44:45.509082 | orchestrator | Monday 16 March 2026 00:44:38 +0000 (0:00:00.191) 0:00:58.179 ********** 2026-03-16 00:44:45.509093 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:44:45.509104 | orchestrator | 2026-03-16 00:44:45.509115 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:44:45.509126 | orchestrator | Monday 16 March 2026 00:44:38 +0000 (0:00:00.197) 0:00:58.376 ********** 2026-03-16 00:44:45.509143 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:44:45.509164 | orchestrator | 2026-03-16 00:44:45.509185 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:44:45.509204 | orchestrator | Monday 16 March 2026 00:44:38 +0000 (0:00:00.202) 0:00:58.578 ********** 2026-03-16 00:44:45.509223 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-16 00:44:45.509261 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-03-16 00:44:45.509282 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-16 00:44:45.509301 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-16 00:44:45.509322 | orchestrator | 2026-03-16 00:44:45.509343 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:44:45.509365 | orchestrator | Monday 16 March 2026 00:44:39 +0000 (0:00:00.604) 0:00:59.183 ********** 2026-03-16 00:44:45.509385 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:44:45.509406 | orchestrator | 2026-03-16 00:44:45.509427 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:44:45.509442 | orchestrator | Monday 16 March 2026 00:44:39 +0000 (0:00:00.194) 0:00:59.378 ********** 2026-03-16 00:44:45.509454 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:44:45.509465 | orchestrator | 2026-03-16 00:44:45.509476 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:44:45.509487 | orchestrator | Monday 16 March 2026 00:44:39 +0000 (0:00:00.205) 0:00:59.583 ********** 2026-03-16 00:44:45.509498 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:44:45.509509 | orchestrator | 2026-03-16 00:44:45.509520 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-16 00:44:45.509530 | orchestrator | Monday 16 March 2026 00:44:40 +0000 (0:00:00.175) 0:00:59.759 ********** 2026-03-16 00:44:45.509541 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:44:45.509552 | orchestrator | 2026-03-16 00:44:45.509563 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-16 00:44:45.509574 | orchestrator | Monday 16 March 2026 00:44:40 +0000 (0:00:00.222) 0:00:59.981 ********** 2026-03-16 00:44:45.509584 | orchestrator | skipping: [testbed-node-5] 2026-03-16 
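The "Add known partitions" tasks above expand each known disk into its partitions (here only sda carries sda1/sda14/sda15/sda16; the loop devices and the whole-disk OSD candidates have none). A minimal sketch of that expansion step, assuming an ansible_devices-like fact shape; the function name and data are illustrative, not the playbook source:

```python
# Hypothetical sketch of the partition-expansion step: for every parent device
# already on the list, append its known partitions. The dict layout mimics
# Ansible's ansible_devices fact; all names here are assumptions.

def add_device_partitions(available, devices):
    """Append every known partition of every listed device to 'available'."""
    for name in list(available):                      # snapshot: don't re-walk additions
        parts = devices.get(name, {}).get("partitions", {})
        available.extend(sorted(parts))
    return available

facts = {
    "sda": {"partitions": {"sda1": {}, "sda14": {}, "sda15": {}, "sda16": {}}},
    "sdb": {"partitions": {}},   # whole-disk OSD candidate, no partitions
    "loop0": {"partitions": {}},
}
result = add_device_partitions(["sda", "sdb", "loop0"], facts)
```

This mirrors why most iterations in the log report "skipping": only devices that actually carry partitions contribute new entries.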
00:44:45.509595 | orchestrator |
2026-03-16 00:44:45.509606 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-16 00:44:45.509617 | orchestrator | Monday 16 March 2026 00:44:40 +0000 (0:00:00.279) 0:01:00.261 **********
2026-03-16 00:44:45.509628 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '20eacd0a-f744-531e-8511-c5afb936ef86'}})
2026-03-16 00:44:45.509652 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c2695ca6-70a1-5c1a-b7de-886954e6bf07'}})
2026-03-16 00:44:45.509663 | orchestrator |
2026-03-16 00:44:45.509674 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-16 00:44:45.509684 | orchestrator | Monday 16 March 2026 00:44:40 +0000 (0:00:00.198) 0:01:00.460 **********
2026-03-16 00:44:45.509696 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-20eacd0a-f744-531e-8511-c5afb936ef86', 'data_vg': 'ceph-20eacd0a-f744-531e-8511-c5afb936ef86'})
2026-03-16 00:44:45.509709 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c2695ca6-70a1-5c1a-b7de-886954e6bf07', 'data_vg': 'ceph-c2695ca6-70a1-5c1a-b7de-886954e6bf07'})
2026-03-16 00:44:45.509720 | orchestrator |
2026-03-16 00:44:45.509731 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-16 00:44:45.509761 | orchestrator | Monday 16 March 2026 00:44:42 +0000 (0:00:01.994) 0:01:02.454 **********
2026-03-16 00:44:45.509772 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-20eacd0a-f744-531e-8511-c5afb936ef86', 'data_vg': 'ceph-20eacd0a-f744-531e-8511-c5afb936ef86'})
2026-03-16 00:44:45.509785 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c2695ca6-70a1-5c1a-b7de-886954e6bf07', 'data_vg': 'ceph-c2695ca6-70a1-5c1a-b7de-886954e6bf07'})
2026-03-16 00:44:45.509796 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:45.509807 | orchestrator |
2026-03-16 00:44:45.509817 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-16 00:44:45.509828 | orchestrator | Monday 16 March 2026 00:44:42 +0000 (0:00:00.145) 0:01:02.600 **********
2026-03-16 00:44:45.509903 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-20eacd0a-f744-531e-8511-c5afb936ef86', 'data_vg': 'ceph-20eacd0a-f744-531e-8511-c5afb936ef86'})
2026-03-16 00:44:45.509918 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c2695ca6-70a1-5c1a-b7de-886954e6bf07', 'data_vg': 'ceph-c2695ca6-70a1-5c1a-b7de-886954e6bf07'})
2026-03-16 00:44:45.509929 | orchestrator |
2026-03-16 00:44:45.509940 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-16 00:44:45.509951 | orchestrator | Monday 16 March 2026 00:44:44 +0000 (0:00:01.391) 0:01:03.992 **********
2026-03-16 00:44:45.509962 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-20eacd0a-f744-531e-8511-c5afb936ef86', 'data_vg': 'ceph-20eacd0a-f744-531e-8511-c5afb936ef86'})
2026-03-16 00:44:45.509973 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c2695ca6-70a1-5c1a-b7de-886954e6bf07', 'data_vg': 'ceph-c2695ca6-70a1-5c1a-b7de-886954e6bf07'})
2026-03-16 00:44:45.509984 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:45.509994 | orchestrator |
2026-03-16 00:44:45.510005 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-16 00:44:45.510073 | orchestrator | Monday 16 March 2026 00:44:44 +0000 (0:00:00.127) 0:01:04.119 **********
2026-03-16 00:44:45.510088 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:45.510099 | orchestrator |
2026-03-16 00:44:45.510110 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-16 00:44:45.510121 | orchestrator | Monday 16 March 2026 00:44:44 +0000 (0:00:00.125) 0:01:04.244 **********
2026-03-16 00:44:45.510131 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-20eacd0a-f744-531e-8511-c5afb936ef86', 'data_vg': 'ceph-20eacd0a-f744-531e-8511-c5afb936ef86'})
2026-03-16 00:44:45.510150 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c2695ca6-70a1-5c1a-b7de-886954e6bf07', 'data_vg': 'ceph-c2695ca6-70a1-5c1a-b7de-886954e6bf07'})
2026-03-16 00:44:45.510161 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:45.510172 | orchestrator |
2026-03-16 00:44:45.510183 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-16 00:44:45.510193 | orchestrator | Monday 16 March 2026 00:44:44 +0000 (0:00:00.122) 0:01:04.366 **********
2026-03-16 00:44:45.510213 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:45.510223 | orchestrator |
2026-03-16 00:44:45.510234 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-16 00:44:45.510245 | orchestrator | Monday 16 March 2026 00:44:44 +0000 (0:00:00.131) 0:01:04.498 **********
2026-03-16 00:44:45.510256 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-20eacd0a-f744-531e-8511-c5afb936ef86', 'data_vg': 'ceph-20eacd0a-f744-531e-8511-c5afb936ef86'})
2026-03-16 00:44:45.510267 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c2695ca6-70a1-5c1a-b7de-886954e6bf07', 'data_vg': 'ceph-c2695ca6-70a1-5c1a-b7de-886954e6bf07'})
2026-03-16 00:44:45.510278 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:45.510288 | orchestrator |
2026-03-16 00:44:45.510299 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-16 00:44:45.510310 | orchestrator | Monday 16 March 2026 00:44:44 +0000 (0:00:00.126) 0:01:04.625 **********
2026-03-16 00:44:45.510321 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:45.510332 | orchestrator |
2026-03-16 00:44:45.510351 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-16 00:44:45.510369 | orchestrator | Monday 16 March 2026 00:44:44 +0000 (0:00:00.116) 0:01:04.741 **********
2026-03-16 00:44:45.510388 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-20eacd0a-f744-531e-8511-c5afb936ef86', 'data_vg': 'ceph-20eacd0a-f744-531e-8511-c5afb936ef86'})
2026-03-16 00:44:45.510405 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c2695ca6-70a1-5c1a-b7de-886954e6bf07', 'data_vg': 'ceph-c2695ca6-70a1-5c1a-b7de-886954e6bf07'})
2026-03-16 00:44:45.510424 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:45.510441 | orchestrator |
2026-03-16 00:44:45.510458 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-16 00:44:45.510475 | orchestrator | Monday 16 March 2026 00:44:45 +0000 (0:00:00.268) 0:01:04.871 **********
2026-03-16 00:44:45.510491 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:44:45.510508 | orchestrator |
2026-03-16 00:44:45.510526 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-16 00:44:45.510545 | orchestrator | Monday 16 March 2026 00:44:45 +0000 (0:00:00.124) 0:01:05.139 **********
2026-03-16 00:44:45.510578 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-20eacd0a-f744-531e-8511-c5afb936ef86', 'data_vg': 'ceph-20eacd0a-f744-531e-8511-c5afb936ef86'})
2026-03-16 00:44:51.109451 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c2695ca6-70a1-5c1a-b7de-886954e6bf07', 'data_vg': 'ceph-c2695ca6-70a1-5c1a-b7de-886954e6bf07'})
2026-03-16 00:44:51.109561 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:51.109579 | orchestrator |
2026-03-16 00:44:51.109592 | orchestrator | TASK [Count OSDs put on
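The "Create dict of block VGs -> PVs" and "Create block VGs/LVs" tasks above reveal a naming convention: each `ceph_osd_devices` entry carries an `osd_lvm_uuid`, from which the VG name `ceph-<uuid>` and the LV name `osd-block-<uuid>` are derived. A hedged reconstruction of that derivation, with variable names inferred from the task output rather than taken from the playbook source:

```python
# Sketch of the VG/LV naming step visible in the log output. The input dict
# matches the loop items printed by 'Create dict of block VGs -> PVs from
# ceph_osd_devices'; the helper name is an illustrative assumption.

ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "20eacd0a-f744-531e-8511-c5afb936ef86"},
    "sdc": {"osd_lvm_uuid": "c2695ca6-70a1-5c1a-b7de-886954e6bf07"},
}

def block_volumes(osd_devices):
    """Map each OSD device to the block LV/VG pair created for it."""
    return [
        {"data": f"osd-block-{v['osd_lvm_uuid']}",
         "data_vg": f"ceph-{v['osd_lvm_uuid']}"}
        for v in osd_devices.values()
    ]

volumes = block_volumes(ceph_osd_devices)
```

These generated pairs are exactly the loop items that the subsequent "Create block VGs" and "Create block LVs" tasks report as `changed`.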
ceph_wal_devices defined in lvm_volumes] ***************
2026-03-16 00:44:51.109605 | orchestrator | Monday 16 March 2026 00:44:45 +0000 (0:00:00.124) 0:01:05.263 **********
2026-03-16 00:44:51.109617 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-20eacd0a-f744-531e-8511-c5afb936ef86', 'data_vg': 'ceph-20eacd0a-f744-531e-8511-c5afb936ef86'})
2026-03-16 00:44:51.109629 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c2695ca6-70a1-5c1a-b7de-886954e6bf07', 'data_vg': 'ceph-c2695ca6-70a1-5c1a-b7de-886954e6bf07'})
2026-03-16 00:44:51.109640 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:51.109651 | orchestrator |
2026-03-16 00:44:51.109662 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-16 00:44:51.109674 | orchestrator | Monday 16 March 2026 00:44:45 +0000 (0:00:00.112) 0:01:05.376 **********
2026-03-16 00:44:51.109685 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-20eacd0a-f744-531e-8511-c5afb936ef86', 'data_vg': 'ceph-20eacd0a-f744-531e-8511-c5afb936ef86'})
2026-03-16 00:44:51.109700 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c2695ca6-70a1-5c1a-b7de-886954e6bf07', 'data_vg': 'ceph-c2695ca6-70a1-5c1a-b7de-886954e6bf07'})
2026-03-16 00:44:51.109747 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:51.109765 | orchestrator |
2026-03-16 00:44:51.109785 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-16 00:44:51.109806 | orchestrator | Monday 16 March 2026 00:44:45 +0000 (0:00:00.141) 0:01:05.517 **********
2026-03-16 00:44:51.109825 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:51.109872 | orchestrator |
2026-03-16 00:44:51.109884 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-16 00:44:51.109895 | orchestrator | Monday 16 March 2026 00:44:45 +0000 (0:00:00.110) 0:01:05.628 **********
2026-03-16 00:44:51.109905 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:51.109916 | orchestrator |
2026-03-16 00:44:51.109927 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-16 00:44:51.109938 | orchestrator | Monday 16 March 2026 00:44:45 +0000 (0:00:00.125) 0:01:05.753 **********
2026-03-16 00:44:51.109951 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:51.109963 | orchestrator |
2026-03-16 00:44:51.109975 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-16 00:44:51.109987 | orchestrator | Monday 16 March 2026 00:44:46 +0000 (0:00:00.129) 0:01:05.882 **********
2026-03-16 00:44:51.110000 | orchestrator | ok: [testbed-node-5] => {
2026-03-16 00:44:51.110013 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-03-16 00:44:51.110087 | orchestrator | }
2026-03-16 00:44:51.110100 | orchestrator |
2026-03-16 00:44:51.110113 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-16 00:44:51.110126 | orchestrator | Monday 16 March 2026 00:44:46 +0000 (0:00:00.134) 0:01:06.017 **********
2026-03-16 00:44:51.110138 | orchestrator | ok: [testbed-node-5] => {
2026-03-16 00:44:51.110150 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-03-16 00:44:51.110163 | orchestrator | }
2026-03-16 00:44:51.110176 | orchestrator |
2026-03-16 00:44:51.110187 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-16 00:44:51.110198 | orchestrator | Monday 16 March 2026 00:44:46 +0000 (0:00:00.128) 0:01:06.146 **********
2026-03-16 00:44:51.110209 | orchestrator | ok: [testbed-node-5] => {
2026-03-16 00:44:51.110220 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-03-16 00:44:51.110231 | orchestrator | }
2026-03-16 00:44:51.110242 | orchestrator |
2026-03-16 00:44:51.110252 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-16 00:44:51.110263 | orchestrator | Monday 16 March 2026 00:44:46 +0000 (0:00:00.113) 0:01:06.260 **********
2026-03-16 00:44:51.110274 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:44:51.110285 | orchestrator |
2026-03-16 00:44:51.110296 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-16 00:44:51.110307 | orchestrator | Monday 16 March 2026 00:44:47 +0000 (0:00:00.531) 0:01:06.791 **********
2026-03-16 00:44:51.110318 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:44:51.110329 | orchestrator |
2026-03-16 00:44:51.110340 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-16 00:44:51.110351 | orchestrator | Monday 16 March 2026 00:44:47 +0000 (0:00:00.535) 0:01:07.327 **********
2026-03-16 00:44:51.110361 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:44:51.110372 | orchestrator |
2026-03-16 00:44:51.110383 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-16 00:44:51.110394 | orchestrator | Monday 16 March 2026 00:44:48 +0000 (0:00:00.696) 0:01:08.024 **********
2026-03-16 00:44:51.110405 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:44:51.110416 | orchestrator |
2026-03-16 00:44:51.110426 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-16 00:44:51.110437 | orchestrator | Monday 16 March 2026 00:44:48 +0000 (0:00:00.138) 0:01:08.162 **********
2026-03-16 00:44:51.110448 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:51.110459 | orchestrator |
2026-03-16 00:44:51.110470 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-16 00:44:51.110490 | orchestrator | Monday 16 March 2026 00:44:48 +0000 (0:00:00.111) 0:01:08.274 **********
2026-03-16 00:44:51.110502 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:51.110513 | orchestrator |
2026-03-16 00:44:51.110523 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-16 00:44:51.110534 | orchestrator | Monday 16 March 2026 00:44:48 +0000 (0:00:00.107) 0:01:08.381 **********
2026-03-16 00:44:51.110545 | orchestrator | ok: [testbed-node-5] => {
2026-03-16 00:44:51.110557 | orchestrator |  "vgs_report": {
2026-03-16 00:44:51.110568 | orchestrator |  "vg": []
2026-03-16 00:44:51.110599 | orchestrator |  }
2026-03-16 00:44:51.110611 | orchestrator | }
2026-03-16 00:44:51.110623 | orchestrator |
2026-03-16 00:44:51.110647 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-16 00:44:51.110659 | orchestrator | Monday 16 March 2026 00:44:48 +0000 (0:00:00.126) 0:01:08.508 **********
2026-03-16 00:44:51.110680 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:51.110691 | orchestrator |
2026-03-16 00:44:51.110702 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-16 00:44:51.110713 | orchestrator | Monday 16 March 2026 00:44:48 +0000 (0:00:00.130) 0:01:08.638 **********
2026-03-16 00:44:51.110724 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:51.110735 | orchestrator |
2026-03-16 00:44:51.110746 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-16 00:44:51.110756 | orchestrator | Monday 16 March 2026 00:44:49 +0000 (0:00:00.134) 0:01:08.773 **********
2026-03-16 00:44:51.110767 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:51.110778 | orchestrator |
2026-03-16 00:44:51.110789 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-16 00:44:51.110800 | orchestrator | Monday 16 March 2026 00:44:49 +0000 (0:00:00.111) 0:01:08.884 **********
2026-03-16 00:44:51.110811 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:51.110821 | orchestrator |
2026-03-16 00:44:51.110912 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-16 00:44:51.110927 | orchestrator | Monday 16 March 2026 00:44:49 +0000 (0:00:00.120) 0:01:09.005 **********
2026-03-16 00:44:51.110938 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:51.110949 | orchestrator |
2026-03-16 00:44:51.110960 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-16 00:44:51.110971 | orchestrator | Monday 16 March 2026 00:44:49 +0000 (0:00:00.112) 0:01:09.117 **********
2026-03-16 00:44:51.110981 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:51.110992 | orchestrator |
2026-03-16 00:44:51.111021 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-16 00:44:51.111033 | orchestrator | Monday 16 March 2026 00:44:49 +0000 (0:00:00.133) 0:01:09.251 **********
2026-03-16 00:44:51.111043 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:51.111054 | orchestrator |
2026-03-16 00:44:51.111065 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-16 00:44:51.111076 | orchestrator | Monday 16 March 2026 00:44:49 +0000 (0:00:00.134) 0:01:09.385 **********
2026-03-16 00:44:51.111086 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:51.111097 | orchestrator |
2026-03-16 00:44:51.111108 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-16 00:44:51.111118 | orchestrator | Monday 16 March 2026 00:44:49 +0000 (0:00:00.271) 0:01:09.657 **********
2026-03-16 00:44:51.111129 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:51.111140 | orchestrator |
2026-03-16 00:44:51.111156 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-16 00:44:51.111167 | orchestrator | Monday 16 March 2026 00:44:50 +0000 (0:00:00.135) 0:01:09.793 **********
2026-03-16 00:44:51.111178 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:51.111188 | orchestrator |
2026-03-16 00:44:51.111199 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-16 00:44:51.111210 | orchestrator | Monday 16 March 2026 00:44:50 +0000 (0:00:00.123) 0:01:09.917 **********
2026-03-16 00:44:51.111229 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:51.111239 | orchestrator |
2026-03-16 00:44:51.111250 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-16 00:44:51.111261 | orchestrator | Monday 16 March 2026 00:44:50 +0000 (0:00:00.122) 0:01:10.046 **********
2026-03-16 00:44:51.111272 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:51.111283 | orchestrator |
2026-03-16 00:44:51.111294 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-16 00:44:51.111304 | orchestrator | Monday 16 March 2026 00:44:50 +0000 (0:00:00.134) 0:01:10.168 **********
2026-03-16 00:44:51.111315 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:51.111326 | orchestrator |
2026-03-16 00:44:51.111337 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-16 00:44:51.111348 | orchestrator | Monday 16 March 2026 00:44:50 +0000 (0:00:00.134) 0:01:10.302 **********
2026-03-16 00:44:51.111359 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:51.111369 | orchestrator |
2026-03-16 00:44:51.111380 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-16 00:44:51.111391 | orchestrator | Monday 16 March 2026 00:44:50 +0000 (0:00:00.119) 0:01:10.422 **********
2026-03-16 00:44:51.111402 | orchestrator | skipping: [testbed-node-5] => (item={'data':
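The "Fail if size of ... LVs > available" and "Fail if DB LV size < 30 GiB" tasks above act as guard rails before any DB/WAL LV is created (all skipped here, since this run uses no dedicated DB/WAL devices). A minimal sketch of those two checks, assuming the 30 GiB floor named in the task titles; the helper and its arguments are illustrative, not the playbook's actual assertions:

```python
# Hedged sketch of the pre-creation size checks implied by the task names:
# reject DB LVs below 30 GiB, and reject requests that exceed the VG's free
# space. Function and parameter names are assumptions for illustration.

GIB = 1024 ** 3
MIN_DB_LV_BYTES = 30 * GIB  # floor stated in the 'Fail if DB LV size < 30 GiB' tasks

def check_db_lvs(lv_size_bytes, num_lvs, vg_free_bytes):
    """Raise if the planned DB LVs violate either guard rail."""
    if lv_size_bytes < MIN_DB_LV_BYTES:
        raise ValueError("DB LV size < 30 GiB")
    if lv_size_bytes * num_lvs > vg_free_bytes:
        raise ValueError("size of DB LVs > available")
    return True
```

With no `ceph_db_devices`/`ceph_db_wal_devices` configured, both conditions are vacuous, which is why every check in this run reports "skipping".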
'osd-block-20eacd0a-f744-531e-8511-c5afb936ef86', 'data_vg': 'ceph-20eacd0a-f744-531e-8511-c5afb936ef86'})
2026-03-16 00:44:51.111413 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c2695ca6-70a1-5c1a-b7de-886954e6bf07', 'data_vg': 'ceph-c2695ca6-70a1-5c1a-b7de-886954e6bf07'})
2026-03-16 00:44:51.111429 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:51.111448 | orchestrator |
2026-03-16 00:44:51.111472 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-16 00:44:51.111502 | orchestrator | Monday 16 March 2026 00:44:50 +0000 (0:00:00.156) 0:01:10.579 **********
2026-03-16 00:44:51.111521 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-20eacd0a-f744-531e-8511-c5afb936ef86', 'data_vg': 'ceph-20eacd0a-f744-531e-8511-c5afb936ef86'})
2026-03-16 00:44:51.111539 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c2695ca6-70a1-5c1a-b7de-886954e6bf07', 'data_vg': 'ceph-c2695ca6-70a1-5c1a-b7de-886954e6bf07'})
2026-03-16 00:44:51.111557 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:51.111576 | orchestrator |
2026-03-16 00:44:51.111591 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-16 00:44:51.111609 | orchestrator | Monday 16 March 2026 00:44:50 +0000 (0:00:00.146) 0:01:10.726 **********
2026-03-16 00:44:51.111643 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-20eacd0a-f744-531e-8511-c5afb936ef86', 'data_vg': 'ceph-20eacd0a-f744-531e-8511-c5afb936ef86'})
2026-03-16 00:44:53.999973 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c2695ca6-70a1-5c1a-b7de-886954e6bf07', 'data_vg': 'ceph-c2695ca6-70a1-5c1a-b7de-886954e6bf07'})
2026-03-16 00:44:54.000050 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:54.000058 | orchestrator |
2026-03-16 00:44:54.000064 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-16 00:44:54.000071 | orchestrator | Monday 16 March 2026 00:44:51 +0000 (0:00:00.139) 0:01:10.866 **********
2026-03-16 00:44:54.000077 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-20eacd0a-f744-531e-8511-c5afb936ef86', 'data_vg': 'ceph-20eacd0a-f744-531e-8511-c5afb936ef86'})
2026-03-16 00:44:54.000082 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c2695ca6-70a1-5c1a-b7de-886954e6bf07', 'data_vg': 'ceph-c2695ca6-70a1-5c1a-b7de-886954e6bf07'})
2026-03-16 00:44:54.000087 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:54.000092 | orchestrator |
2026-03-16 00:44:54.000097 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-03-16 00:44:54.000101 | orchestrator | Monday 16 March 2026 00:44:51 +0000 (0:00:00.142) 0:01:11.008 **********
2026-03-16 00:44:54.000124 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-20eacd0a-f744-531e-8511-c5afb936ef86', 'data_vg': 'ceph-20eacd0a-f744-531e-8511-c5afb936ef86'})
2026-03-16 00:44:54.000129 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c2695ca6-70a1-5c1a-b7de-886954e6bf07', 'data_vg': 'ceph-c2695ca6-70a1-5c1a-b7de-886954e6bf07'})
2026-03-16 00:44:54.000134 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:54.000139 | orchestrator |
2026-03-16 00:44:54.000143 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-03-16 00:44:54.000148 | orchestrator | Monday 16 March 2026 00:44:51 +0000 (0:00:00.149) 0:01:11.158 **********
2026-03-16 00:44:54.000153 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-20eacd0a-f744-531e-8511-c5afb936ef86', 'data_vg': 'ceph-20eacd0a-f744-531e-8511-c5afb936ef86'})
2026-03-16 00:44:54.000158 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c2695ca6-70a1-5c1a-b7de-886954e6bf07', 'data_vg': 'ceph-c2695ca6-70a1-5c1a-b7de-886954e6bf07'})
2026-03-16 00:44:54.000173 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:54.000178 | orchestrator |
2026-03-16 00:44:54.000183 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-03-16 00:44:54.000188 | orchestrator | Monday 16 March 2026 00:44:51 +0000 (0:00:00.347) 0:01:11.506 **********
2026-03-16 00:44:54.000193 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-20eacd0a-f744-531e-8511-c5afb936ef86', 'data_vg': 'ceph-20eacd0a-f744-531e-8511-c5afb936ef86'})
2026-03-16 00:44:54.000198 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c2695ca6-70a1-5c1a-b7de-886954e6bf07', 'data_vg': 'ceph-c2695ca6-70a1-5c1a-b7de-886954e6bf07'})
2026-03-16 00:44:54.000203 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:54.000208 | orchestrator |
2026-03-16 00:44:54.000213 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-03-16 00:44:54.000218 | orchestrator | Monday 16 March 2026 00:44:51 +0000 (0:00:00.151) 0:01:11.657 **********
2026-03-16 00:44:54.000222 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-20eacd0a-f744-531e-8511-c5afb936ef86', 'data_vg': 'ceph-20eacd0a-f744-531e-8511-c5afb936ef86'})
2026-03-16 00:44:54.000227 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c2695ca6-70a1-5c1a-b7de-886954e6bf07', 'data_vg': 'ceph-c2695ca6-70a1-5c1a-b7de-886954e6bf07'})
2026-03-16 00:44:54.000232 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:54.000237 | orchestrator |
2026-03-16 00:44:54.000242 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-03-16 00:44:54.000246 | orchestrator | Monday 16 March 2026 00:44:52 +0000 (0:00:00.137) 0:01:11.794 **********
2026-03-16 00:44:54.000251 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:44:54.000257 | orchestrator |
2026-03-16 00:44:54.000262 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-03-16 00:44:54.000267 | orchestrator | Monday 16 March 2026 00:44:52 +0000 (0:00:00.527) 0:01:12.322 **********
2026-03-16 00:44:54.000272 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:44:54.000276 | orchestrator |
2026-03-16 00:44:54.000281 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-03-16 00:44:54.000286 | orchestrator | Monday 16 March 2026 00:44:53 +0000 (0:00:00.556) 0:01:12.878 **********
2026-03-16 00:44:54.000291 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:44:54.000296 | orchestrator |
2026-03-16 00:44:54.000301 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-03-16 00:44:54.000309 | orchestrator | Monday 16 March 2026 00:44:53 +0000 (0:00:00.130) 0:01:13.008 **********
2026-03-16 00:44:54.000317 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-20eacd0a-f744-531e-8511-c5afb936ef86', 'vg_name': 'ceph-20eacd0a-f744-531e-8511-c5afb936ef86'})
2026-03-16 00:44:54.000331 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-c2695ca6-70a1-5c1a-b7de-886954e6bf07', 'vg_name': 'ceph-c2695ca6-70a1-5c1a-b7de-886954e6bf07'})
2026-03-16 00:44:54.000349 | orchestrator |
2026-03-16 00:44:54.000356 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-03-16 00:44:54.000364 | orchestrator | Monday 16 March 2026 00:44:53 +0000 (0:00:00.141) 0:01:13.150 **********
2026-03-16 00:44:54.000388 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-20eacd0a-f744-531e-8511-c5afb936ef86', 'data_vg': 'ceph-20eacd0a-f744-531e-8511-c5afb936ef86'})
2026-03-16 00:44:54.000397 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c2695ca6-70a1-5c1a-b7de-886954e6bf07', 'data_vg': 'ceph-c2695ca6-70a1-5c1a-b7de-886954e6bf07'})
2026-03-16 00:44:54.000405 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:54.000414 | orchestrator |
2026-03-16 00:44:54.000422 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-03-16 00:44:54.000430 | orchestrator | Monday 16 March 2026 00:44:53 +0000 (0:00:00.145) 0:01:13.296 **********
2026-03-16 00:44:54.000439 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-20eacd0a-f744-531e-8511-c5afb936ef86', 'data_vg': 'ceph-20eacd0a-f744-531e-8511-c5afb936ef86'})
2026-03-16 00:44:54.000448 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c2695ca6-70a1-5c1a-b7de-886954e6bf07', 'data_vg': 'ceph-c2695ca6-70a1-5c1a-b7de-886954e6bf07'})
2026-03-16 00:44:54.000457 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:54.000465 | orchestrator |
2026-03-16 00:44:54.000474 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-03-16 00:44:54.000482 | orchestrator | Monday 16 March 2026 00:44:53 +0000 (0:00:00.144) 0:01:13.440 **********
2026-03-16 00:44:54.000491 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-20eacd0a-f744-531e-8511-c5afb936ef86', 'data_vg': 'ceph-20eacd0a-f744-531e-8511-c5afb936ef86'})
2026-03-16 00:44:54.000500 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c2695ca6-70a1-5c1a-b7de-886954e6bf07', 'data_vg': 'ceph-c2695ca6-70a1-5c1a-b7de-886954e6bf07'})
2026-03-16 00:44:54.000508 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:44:54.000516 | orchestrator |
2026-03-16 00:44:54.000525 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-03-16 00:44:54.000532 | orchestrator | Monday 16 March 2026 00:44:53 +0000 (0:00:00.147) 0:01:13.588 **********
2026-03-16 00:44:54.000542 |
orchestrator | ok: [testbed-node-5] => { 2026-03-16 00:44:54.000550 | orchestrator |  "lvm_report": { 2026-03-16 00:44:54.000559 | orchestrator |  "lv": [ 2026-03-16 00:44:54.000569 | orchestrator |  { 2026-03-16 00:44:54.000578 | orchestrator |  "lv_name": "osd-block-20eacd0a-f744-531e-8511-c5afb936ef86", 2026-03-16 00:44:54.000595 | orchestrator |  "vg_name": "ceph-20eacd0a-f744-531e-8511-c5afb936ef86" 2026-03-16 00:44:54.000601 | orchestrator |  }, 2026-03-16 00:44:54.000606 | orchestrator |  { 2026-03-16 00:44:54.000612 | orchestrator |  "lv_name": "osd-block-c2695ca6-70a1-5c1a-b7de-886954e6bf07", 2026-03-16 00:44:54.000617 | orchestrator |  "vg_name": "ceph-c2695ca6-70a1-5c1a-b7de-886954e6bf07" 2026-03-16 00:44:54.000623 | orchestrator |  } 2026-03-16 00:44:54.000628 | orchestrator |  ], 2026-03-16 00:44:54.000634 | orchestrator |  "pv": [ 2026-03-16 00:44:54.000639 | orchestrator |  { 2026-03-16 00:44:54.000645 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-16 00:44:54.000653 | orchestrator |  "vg_name": "ceph-20eacd0a-f744-531e-8511-c5afb936ef86" 2026-03-16 00:44:54.000661 | orchestrator |  }, 2026-03-16 00:44:54.000674 | orchestrator |  { 2026-03-16 00:44:54.000684 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-16 00:44:54.000691 | orchestrator |  "vg_name": "ceph-c2695ca6-70a1-5c1a-b7de-886954e6bf07" 2026-03-16 00:44:54.000698 | orchestrator |  } 2026-03-16 00:44:54.000706 | orchestrator |  ] 2026-03-16 00:44:54.000713 | orchestrator |  } 2026-03-16 00:44:54.000721 | orchestrator | } 2026-03-16 00:44:54.000737 | orchestrator | 2026-03-16 00:44:54.000745 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-16 00:44:54.000752 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-16 00:44:54.000760 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-16 00:44:54.000768 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-16 00:44:54.000776 | orchestrator | 2026-03-16 00:44:54.000784 | orchestrator | 2026-03-16 00:44:54.000792 | orchestrator | 2026-03-16 00:44:54.000801 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-16 00:44:54.000808 | orchestrator | Monday 16 March 2026 00:44:53 +0000 (0:00:00.145) 0:01:13.733 ********** 2026-03-16 00:44:54.000816 | orchestrator | =============================================================================== 2026-03-16 00:44:54.000824 | orchestrator | Create block VGs -------------------------------------------------------- 6.04s 2026-03-16 00:44:54.000857 | orchestrator | Create block LVs -------------------------------------------------------- 4.27s 2026-03-16 00:44:54.000866 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.83s 2026-03-16 00:44:54.000873 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.76s 2026-03-16 00:44:54.000880 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.72s 2026-03-16 00:44:54.000888 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.64s 2026-03-16 00:44:54.000895 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.63s 2026-03-16 00:44:54.000903 | orchestrator | Add known partitions to the list of available block devices ------------- 1.61s 2026-03-16 00:44:54.000919 | orchestrator | Add known links to the list of available block devices ------------------ 1.42s 2026-03-16 00:44:54.385791 | orchestrator | Add known partitions to the list of available block devices ------------- 1.07s 2026-03-16 00:44:54.385934 | orchestrator | Add known partitions to the list of available block devices ------------- 0.93s 2026-03-16 00:44:54.385946 | 
orchestrator | Print LVM report data --------------------------------------------------- 0.90s 2026-03-16 00:44:54.385954 | orchestrator | Print size needed for LVs on ceph_db_devices ---------------------------- 0.83s 2026-03-16 00:44:54.385961 | orchestrator | Print 'Create WAL VGs' -------------------------------------------------- 0.81s 2026-03-16 00:44:54.385968 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.77s 2026-03-16 00:44:54.385976 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.76s 2026-03-16 00:44:54.385983 | orchestrator | Add known links to the list of available block devices ------------------ 0.75s 2026-03-16 00:44:54.385990 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.74s 2026-03-16 00:44:54.385997 | orchestrator | Add known links to the list of available block devices ------------------ 0.73s 2026-03-16 00:44:54.386005 | orchestrator | Add known links to the list of available block devices ------------------ 0.71s 2026-03-16 00:45:06.586810 | orchestrator | 2026-03-16 00:45:06 | INFO  | Task 9767a05a-7e23-496f-a7f4-be144c118ffd (facts) was prepared for execution. 2026-03-16 00:45:06.586912 | orchestrator | 2026-03-16 00:45:06 | INFO  | It takes a moment until task 9767a05a-7e23-496f-a7f4-be144c118ffd (facts) has been started and output is visible here. 
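The "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" task above merges the `lvs` and `pvs` query results into the single `lvm_report` structure printed later in the play. A minimal sketch of what that combination might look like, assuming the commands were run with `--reportformat json` (the variable names and shortened UUIDs here are illustrative, not taken from the actual role):

```python
import json

# Hypothetical captures of `lvs --reportformat json -o lv_name,vg_name`
# and `pvs --reportformat json -o pv_name,vg_name`; the key layout mirrors
# the lvm_report dict printed by the "Print LVM report data" task.
_lvs_cmd_output = json.dumps({
    "report": [{"lv": [
        {"lv_name": "osd-block-20eacd0a", "vg_name": "ceph-20eacd0a"},
    ]}]
})
_pvs_cmd_output = json.dumps({
    "report": [{"pv": [
        {"pv_name": "/dev/sdb", "vg_name": "ceph-20eacd0a"},
    ]}]
})

def combine_reports(lvs_json: str, pvs_json: str) -> dict:
    """Merge the lv and pv report sections into one lvm_report dict."""
    lvs = json.loads(lvs_json)["report"][0]["lv"]
    pvs = json.loads(pvs_json)["report"][0]["pv"]
    return {"lv": lvs, "pv": pvs}

lvm_report = combine_reports(_lvs_cmd_output, _pvs_cmd_output)
```

The subsequent "Fail if ... LV defined in lvm_volumes is missing" tasks can then check each expected VG/LV pair against this combined report.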
2026-03-16 00:45:19.074134 | orchestrator | 2026-03-16 00:45:19.074234 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-16 00:45:19.074248 | orchestrator | 2026-03-16 00:45:19.074258 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-16 00:45:19.074270 | orchestrator | Monday 16 March 2026 00:45:10 +0000 (0:00:00.236) 0:00:00.236 ********** 2026-03-16 00:45:19.074315 | orchestrator | ok: [testbed-manager] 2026-03-16 00:45:19.074332 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:45:19.074346 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:45:19.074359 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:45:19.074390 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:45:19.074414 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:45:19.074426 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:45:19.074440 | orchestrator | 2026-03-16 00:45:19.074454 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-16 00:45:19.074484 | orchestrator | Monday 16 March 2026 00:45:11 +0000 (0:00:00.968) 0:00:01.205 ********** 2026-03-16 00:45:19.074498 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:45:19.074511 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:45:19.074524 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:45:19.074537 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:45:19.074550 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:45:19.074562 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:45:19.074575 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:45:19.074589 | orchestrator | 2026-03-16 00:45:19.074602 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-16 00:45:19.074616 | orchestrator | 2026-03-16 00:45:19.074629 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-16 00:45:19.074641 | orchestrator | Monday 16 March 2026 00:45:12 +0000 (0:00:01.120) 0:00:02.326 ********** 2026-03-16 00:45:19.074654 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:45:19.074669 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:45:19.074683 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:45:19.074697 | orchestrator | ok: [testbed-manager] 2026-03-16 00:45:19.074711 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:45:19.074726 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:45:19.074740 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:45:19.074755 | orchestrator | 2026-03-16 00:45:19.074769 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-16 00:45:19.074784 | orchestrator | 2026-03-16 00:45:19.074798 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-16 00:45:19.074867 | orchestrator | Monday 16 March 2026 00:45:18 +0000 (0:00:05.743) 0:00:08.069 ********** 2026-03-16 00:45:19.074882 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:45:19.074897 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:45:19.074911 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:45:19.074925 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:45:19.074942 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:45:19.074957 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:45:19.074971 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:45:19.074986 | orchestrator | 2026-03-16 00:45:19.075001 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-16 00:45:19.075016 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-16 00:45:19.075033 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-16 00:45:19.075048 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-16 00:45:19.075062 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-16 00:45:19.075077 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-16 00:45:19.075092 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-16 00:45:19.075105 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-16 00:45:19.075136 | orchestrator | 2026-03-16 00:45:19.075151 | orchestrator | 2026-03-16 00:45:19.075165 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-16 00:45:19.075179 | orchestrator | Monday 16 March 2026 00:45:18 +0000 (0:00:00.469) 0:00:08.538 ********** 2026-03-16 00:45:19.075192 | orchestrator | =============================================================================== 2026-03-16 00:45:19.075206 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.74s 2026-03-16 00:45:19.075218 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.12s 2026-03-16 00:45:19.075231 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.97s 2026-03-16 00:45:19.075244 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.47s 2026-03-16 00:45:31.226427 | orchestrator | 2026-03-16 00:45:31 | INFO  | Task 58030d63-e23a-404c-bb2a-3d7ad2fc2e58 (frr) was prepared for execution. 2026-03-16 00:45:31.226538 | orchestrator | 2026-03-16 00:45:31 | INFO  | It takes a moment until task 58030d63-e23a-404c-bb2a-3d7ad2fc2e58 (frr) has been started and output is visible here. 
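The PLAY RECAP blocks above follow a fixed `key=value` layout per host. A hypothetical helper for turning one recap line into counters, useful when post-processing console logs like this one (not part of the job itself):

```python
import re

def parse_recap(line: str) -> dict:
    """Parse an Ansible PLAY RECAP host line, e.g.
    'testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0',
    into a dict of integer counters."""
    return {k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", line)}

counters = parse_recap(
    "testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 "
    "skipped=2  rescued=0 ignored=0"
)
```

A log post-processor could sum `failed` and `unreachable` across hosts to decide whether a play succeeded.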
2026-03-16 00:45:55.859159 | orchestrator | 2026-03-16 00:45:55.859252 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-03-16 00:45:55.859262 | orchestrator | 2026-03-16 00:45:55.859266 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-03-16 00:45:55.859271 | orchestrator | Monday 16 March 2026 00:45:35 +0000 (0:00:00.214) 0:00:00.214 ********** 2026-03-16 00:45:55.859276 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-03-16 00:45:55.859281 | orchestrator | 2026-03-16 00:45:55.859286 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-03-16 00:45:55.859292 | orchestrator | Monday 16 March 2026 00:45:35 +0000 (0:00:00.206) 0:00:00.420 ********** 2026-03-16 00:45:55.859298 | orchestrator | changed: [testbed-manager] 2026-03-16 00:45:55.859308 | orchestrator | 2026-03-16 00:45:55.859316 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-03-16 00:45:55.859323 | orchestrator | Monday 16 March 2026 00:45:36 +0000 (0:00:01.103) 0:00:01.524 ********** 2026-03-16 00:45:55.859329 | orchestrator | changed: [testbed-manager] 2026-03-16 00:45:55.859335 | orchestrator | 2026-03-16 00:45:55.859341 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-03-16 00:45:55.859347 | orchestrator | Monday 16 March 2026 00:45:45 +0000 (0:00:08.654) 0:00:10.179 ********** 2026-03-16 00:45:55.859352 | orchestrator | ok: [testbed-manager] 2026-03-16 00:45:55.859359 | orchestrator | 2026-03-16 00:45:55.859365 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-03-16 00:45:55.859371 | orchestrator | Monday 16 March 2026 00:45:46 +0000 (0:00:01.012) 0:00:11.191 ********** 2026-03-16 
00:45:55.859377 | orchestrator | changed: [testbed-manager] 2026-03-16 00:45:55.859383 | orchestrator | 2026-03-16 00:45:55.859389 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-03-16 00:45:55.859396 | orchestrator | Monday 16 March 2026 00:45:47 +0000 (0:00:01.021) 0:00:12.213 ********** 2026-03-16 00:45:55.859402 | orchestrator | ok: [testbed-manager] 2026-03-16 00:45:55.859408 | orchestrator | 2026-03-16 00:45:55.859414 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-03-16 00:45:55.859421 | orchestrator | Monday 16 March 2026 00:45:48 +0000 (0:00:01.225) 0:00:13.438 ********** 2026-03-16 00:45:55.859427 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:45:55.859434 | orchestrator | 2026-03-16 00:45:55.859440 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-03-16 00:45:55.859447 | orchestrator | Monday 16 March 2026 00:45:48 +0000 (0:00:00.134) 0:00:13.573 ********** 2026-03-16 00:45:55.859470 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:45:55.859496 | orchestrator | 2026-03-16 00:45:55.859503 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-03-16 00:45:55.859509 | orchestrator | Monday 16 March 2026 00:45:48 +0000 (0:00:00.154) 0:00:13.728 ********** 2026-03-16 00:45:55.859517 | orchestrator | changed: [testbed-manager] 2026-03-16 00:45:55.859523 | orchestrator | 2026-03-16 00:45:55.859530 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-03-16 00:45:55.859536 | orchestrator | Monday 16 March 2026 00:45:49 +0000 (0:00:01.009) 0:00:14.738 ********** 2026-03-16 00:45:55.859544 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-03-16 00:45:55.859551 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-03-16 00:45:55.859559 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-03-16 00:45:55.859564 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-03-16 00:45:55.859570 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-03-16 00:45:55.859577 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-03-16 00:45:55.859583 | orchestrator | 2026-03-16 00:45:55.859589 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-03-16 00:45:55.859595 | orchestrator | Monday 16 March 2026 00:45:52 +0000 (0:00:02.328) 0:00:17.066 ********** 2026-03-16 00:45:55.859600 | orchestrator | ok: [testbed-manager] 2026-03-16 00:45:55.859606 | orchestrator | 2026-03-16 00:45:55.859612 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-03-16 00:45:55.859618 | orchestrator | Monday 16 March 2026 00:45:53 +0000 (0:00:01.799) 0:00:18.866 ********** 2026-03-16 00:45:55.859625 | orchestrator | changed: [testbed-manager] 2026-03-16 00:45:55.859629 | orchestrator | 2026-03-16 00:45:55.859633 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-16 00:45:55.859637 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-16 00:45:55.859644 | orchestrator | 2026-03-16 00:45:55.859650 | orchestrator | 2026-03-16 00:45:55.859657 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-16 00:45:55.859664 | orchestrator | Monday 16 March 2026 00:45:55 +0000 (0:00:01.560) 0:00:20.427 ********** 2026-03-16 00:45:55.859671 | 
orchestrator | =============================================================================== 2026-03-16 00:45:55.859677 | orchestrator | osism.services.frr : Install frr package -------------------------------- 8.65s 2026-03-16 00:45:55.859684 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.33s 2026-03-16 00:45:55.859690 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.80s 2026-03-16 00:45:55.859695 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.56s 2026-03-16 00:45:55.859698 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.23s 2026-03-16 00:45:55.859717 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.10s 2026-03-16 00:45:55.859721 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.02s 2026-03-16 00:45:55.859725 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.01s 2026-03-16 00:45:55.859729 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.01s 2026-03-16 00:45:55.859732 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.21s 2026-03-16 00:45:55.859736 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.15s 2026-03-16 00:45:55.859740 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.13s 2026-03-16 00:45:56.288817 | orchestrator | 2026-03-16 00:45:56.290819 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Mon Mar 16 00:45:56 UTC 2026 2026-03-16 00:45:56.290874 | orchestrator | 2026-03-16 00:45:58.321016 | orchestrator | 2026-03-16 00:45:58 | INFO  | Collection nutshell is prepared for execution 2026-03-16 00:45:58.321097 | orchestrator | 2026-03-16 00:45:58 | INFO  | A [0] - 
dotfiles 2026-03-16 00:46:08.335039 | orchestrator | 2026-03-16 00:46:08 | INFO  | A [0] - homer 2026-03-16 00:46:08.335144 | orchestrator | 2026-03-16 00:46:08 | INFO  | A [0] - netdata 2026-03-16 00:46:08.335160 | orchestrator | 2026-03-16 00:46:08 | INFO  | A [0] - openstackclient 2026-03-16 00:46:08.335172 | orchestrator | 2026-03-16 00:46:08 | INFO  | A [0] - phpmyadmin 2026-03-16 00:46:08.335183 | orchestrator | 2026-03-16 00:46:08 | INFO  | A [0] - common 2026-03-16 00:46:08.339649 | orchestrator | 2026-03-16 00:46:08 | INFO  | A [1] -- loadbalancer 2026-03-16 00:46:08.340076 | orchestrator | 2026-03-16 00:46:08 | INFO  | A [2] --- opensearch 2026-03-16 00:46:08.340148 | orchestrator | 2026-03-16 00:46:08 | INFO  | A [2] --- mariadb-ng 2026-03-16 00:46:08.340171 | orchestrator | 2026-03-16 00:46:08 | INFO  | A [3] ---- horizon 2026-03-16 00:46:08.340440 | orchestrator | 2026-03-16 00:46:08 | INFO  | A [3] ---- keystone 2026-03-16 00:46:08.340512 | orchestrator | 2026-03-16 00:46:08 | INFO  | A [4] ----- neutron 2026-03-16 00:46:08.340818 | orchestrator | 2026-03-16 00:46:08 | INFO  | A [5] ------ wait-for-nova 2026-03-16 00:46:08.340948 | orchestrator | 2026-03-16 00:46:08 | INFO  | A [6] ------- octavia 2026-03-16 00:46:08.342471 | orchestrator | 2026-03-16 00:46:08 | INFO  | A [4] ----- barbican 2026-03-16 00:46:08.342609 | orchestrator | 2026-03-16 00:46:08 | INFO  | A [4] ----- designate 2026-03-16 00:46:08.342630 | orchestrator | 2026-03-16 00:46:08 | INFO  | A [4] ----- ironic 2026-03-16 00:46:08.342879 | orchestrator | 2026-03-16 00:46:08 | INFO  | A [4] ----- placement 2026-03-16 00:46:08.342901 | orchestrator | 2026-03-16 00:46:08 | INFO  | A [4] ----- magnum 2026-03-16 00:46:08.343711 | orchestrator | 2026-03-16 00:46:08 | INFO  | A [1] -- openvswitch 2026-03-16 00:46:08.343912 | orchestrator | 2026-03-16 00:46:08 | INFO  | A [2] --- ovn 2026-03-16 00:46:08.344136 | orchestrator | 2026-03-16 00:46:08 | INFO  | A [1] -- memcached 2026-03-16 
00:46:08.344382 | orchestrator | 2026-03-16 00:46:08 | INFO  | A [1] -- redis 2026-03-16 00:46:08.344541 | orchestrator | 2026-03-16 00:46:08 | INFO  | A [1] -- rabbitmq-ng 2026-03-16 00:46:08.344774 | orchestrator | 2026-03-16 00:46:08 | INFO  | A [0] - kubernetes 2026-03-16 00:46:08.347364 | orchestrator | 2026-03-16 00:46:08 | INFO  | A [1] -- kubeconfig 2026-03-16 00:46:08.347415 | orchestrator | 2026-03-16 00:46:08 | INFO  | A [1] -- copy-kubeconfig 2026-03-16 00:46:08.347718 | orchestrator | 2026-03-16 00:46:08 | INFO  | A [0] - ceph 2026-03-16 00:46:08.349998 | orchestrator | 2026-03-16 00:46:08 | INFO  | A [1] -- ceph-pools 2026-03-16 00:46:08.350284 | orchestrator | 2026-03-16 00:46:08 | INFO  | A [2] --- copy-ceph-keys 2026-03-16 00:46:08.350449 | orchestrator | 2026-03-16 00:46:08 | INFO  | A [3] ---- cephclient 2026-03-16 00:46:08.350474 | orchestrator | 2026-03-16 00:46:08 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-03-16 00:46:08.350595 | orchestrator | 2026-03-16 00:46:08 | INFO  | A [4] ----- wait-for-keystone 2026-03-16 00:46:08.350622 | orchestrator | 2026-03-16 00:46:08 | INFO  | A [5] ------ kolla-ceph-rgw 2026-03-16 00:46:08.350640 | orchestrator | 2026-03-16 00:46:08 | INFO  | A [5] ------ glance 2026-03-16 00:46:08.350658 | orchestrator | 2026-03-16 00:46:08 | INFO  | A [5] ------ cinder 2026-03-16 00:46:08.350714 | orchestrator | 2026-03-16 00:46:08 | INFO  | A [5] ------ nova 2026-03-16 00:46:08.351110 | orchestrator | 2026-03-16 00:46:08 | INFO  | A [4] ----- prometheus 2026-03-16 00:46:08.351152 | orchestrator | 2026-03-16 00:46:08 | INFO  | A [5] ------ grafana 2026-03-16 00:46:08.542917 | orchestrator | 2026-03-16 00:46:08 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-03-16 00:46:08.543034 | orchestrator | 2026-03-16 00:46:08 | INFO  | Tasks are running in the background 2026-03-16 00:46:11.360835 | orchestrator | 2026-03-16 00:46:11 | INFO  | No task IDs specified, wait for all currently running 
tasks 2026-03-16 00:46:13.452415 | orchestrator | 2026-03-16 00:46:13 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:46:13.455318 | orchestrator | 2026-03-16 00:46:13 | INFO  | Task dc1d723f-9b11-4d5d-86fd-5697e584fd35 is in state STARTED 2026-03-16 00:46:13.455570 | orchestrator | 2026-03-16 00:46:13 | INFO  | Task db5ca6bf-af7e-48ca-a559-52461edeec37 is in state STARTED 2026-03-16 00:46:13.456112 | orchestrator | 2026-03-16 00:46:13 | INFO  | Task c3a7c8c2-a988-4e2c-af9b-cb44e64f61c4 is in state STARTED 2026-03-16 00:46:13.457256 | orchestrator | 2026-03-16 00:46:13 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:46:13.457416 | orchestrator | 2026-03-16 00:46:13 | INFO  | Task 8278370d-e96f-487a-81c2-e60484307db0 is in state STARTED 2026-03-16 00:46:13.458097 | orchestrator | 2026-03-16 00:46:13 | INFO  | Task 7edba792-48f9-421e-83c9-6908d78b0349 is in state STARTED 2026-03-16 00:46:13.458148 | orchestrator | 2026-03-16 00:46:13 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:46:16.505435 | orchestrator | 2026-03-16 00:46:16 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:46:16.505665 | orchestrator | 2026-03-16 00:46:16 | INFO  | Task dc1d723f-9b11-4d5d-86fd-5697e584fd35 is in state STARTED 2026-03-16 00:46:16.508114 | orchestrator | 2026-03-16 00:46:16 | INFO  | Task db5ca6bf-af7e-48ca-a559-52461edeec37 is in state STARTED 2026-03-16 00:46:16.508363 | orchestrator | 2026-03-16 00:46:16 | INFO  | Task c3a7c8c2-a988-4e2c-af9b-cb44e64f61c4 is in state STARTED 2026-03-16 00:46:16.508829 | orchestrator | 2026-03-16 00:46:16 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:46:16.509343 | orchestrator | 2026-03-16 00:46:16 | INFO  | Task 8278370d-e96f-487a-81c2-e60484307db0 is in state STARTED 2026-03-16 00:46:16.509930 | orchestrator | 2026-03-16 00:46:16 | INFO  | Task 
c3a7c8c2-a988-4e2c-af9b-cb44e64f61c4 is in state STARTED 2026-03-16 00:46:35.027530 | orchestrator | 2026-03-16 00:46:35 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:46:35.028923 | orchestrator | 2026-03-16 00:46:35 | INFO  | Task 8278370d-e96f-487a-81c2-e60484307db0 is in state STARTED 2026-03-16 00:46:35.031236 | orchestrator | 2026-03-16 00:46:35 | INFO  | Task 7edba792-48f9-421e-83c9-6908d78b0349 is in state STARTED 2026-03-16 00:46:35.031271 | orchestrator | 2026-03-16 00:46:35 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:46:38.365427 | orchestrator | 2026-03-16 00:46:38.365528 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2026-03-16 00:46:38.365544 | orchestrator | 2026-03-16 00:46:38.365556 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2026-03-16 00:46:38.365568 | orchestrator | Monday 16 March 2026 00:46:20 +0000 (0:00:00.948) 0:00:00.948 ********** 2026-03-16 00:46:38.365579 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:46:38.365591 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:46:38.365602 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:46:38.365613 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:46:38.365623 | orchestrator | changed: [testbed-manager] 2026-03-16 00:46:38.365634 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:46:38.365645 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:46:38.365655 | orchestrator | 2026-03-16 00:46:38.365667 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] 
******** 2026-03-16 00:46:38.365678 | orchestrator | Monday 16 March 2026 00:46:25 +0000 (0:00:04.801) 0:00:05.750 ********** 2026-03-16 00:46:38.365689 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-03-16 00:46:38.365701 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-03-16 00:46:38.365713 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-03-16 00:46:38.365758 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-03-16 00:46:38.365777 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-03-16 00:46:38.365795 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-03-16 00:46:38.365813 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-03-16 00:46:38.365831 | orchestrator | 2026-03-16 00:46:38.365850 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2026-03-16 00:46:38.365870 | orchestrator | Monday 16 March 2026 00:46:27 +0000 (0:00:02.262) 0:00:08.012 ********** 2026-03-16 00:46:38.365906 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-16 00:46:26.419642', 'end': '2026-03-16 00:46:26.426510', 'delta': '0:00:00.006868', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-16 00:46:38.365962 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': 
'', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-16 00:46:26.568484', 'end': '2026-03-16 00:46:26.575782', 'delta': '0:00:00.007298', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-16 00:46:38.365985 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-16 00:46:26.372039', 'end': '2026-03-16 00:46:26.378983', 'delta': '0:00:00.006944', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-16 00:46:38.366095 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-16 00:46:26.584667', 'end': '2026-03-16 00:46:26.591732', 'delta': '0:00:00.007065', 'failed': False, 'msg': 'non-zero return 
code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-16 00:46:38.366113 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-16 00:46:26.684123', 'end': '2026-03-16 00:46:26.690707', 'delta': '0:00:00.006584', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-16 00:46:38.366125 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-16 00:46:26.892877', 'end': '2026-03-16 00:46:26.900813', 'delta': '0:00:00.007936', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 
'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-16 00:46:38.366438 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-16 00:46:27.227148', 'end': '2026-03-16 00:46:27.232962', 'delta': '0:00:00.005814', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-16 00:46:38.366453 | orchestrator | 2026-03-16 00:46:38.366465 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] 
**** 2026-03-16 00:46:38.366477 | orchestrator | Monday 16 March 2026 00:46:29 +0000 (0:00:02.174) 0:00:10.187 ********** 2026-03-16 00:46:38.366488 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-03-16 00:46:38.366499 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-03-16 00:46:38.366510 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-03-16 00:46:38.366521 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-03-16 00:46:38.366531 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-03-16 00:46:38.366542 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-03-16 00:46:38.366553 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-03-16 00:46:38.366564 | orchestrator | 2026-03-16 00:46:38.366575 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2026-03-16 00:46:38.366586 | orchestrator | Monday 16 March 2026 00:46:32 +0000 (0:00:02.799) 0:00:12.986 ********** 2026-03-16 00:46:38.366597 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2026-03-16 00:46:38.366608 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2026-03-16 00:46:38.366620 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2026-03-16 00:46:38.366630 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2026-03-16 00:46:38.366642 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2026-03-16 00:46:38.366652 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2026-03-16 00:46:38.366663 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2026-03-16 00:46:38.366674 | orchestrator | 2026-03-16 00:46:38.366685 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-16 00:46:38.366707 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-16 00:46:38.366742 | orchestrator | 
testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-16 00:46:38.366754 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-16 00:46:38.366765 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-16 00:46:38.366784 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-16 00:46:38.366795 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-16 00:46:38.366811 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-16 00:46:38.366822 | orchestrator | 2026-03-16 00:46:38.366833 | orchestrator | 2026-03-16 00:46:38.366845 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-16 00:46:38.366855 | orchestrator | Monday 16 March 2026 00:46:35 +0000 (0:00:03.392) 0:00:16.379 ********** 2026-03-16 00:46:38.366866 | orchestrator | =============================================================================== 2026-03-16 00:46:38.366877 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.80s 2026-03-16 00:46:38.366888 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.39s 2026-03-16 00:46:38.366899 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.80s 2026-03-16 00:46:38.366910 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.26s 2026-03-16 00:46:38.366921 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. 
--- 2.17s 2026-03-16 00:46:38.366932 | orchestrator | 2026-03-16 00:46:38 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:46:38.366943 | orchestrator | 2026-03-16 00:46:38 | INFO  | Task dc1d723f-9b11-4d5d-86fd-5697e584fd35 is in state STARTED 2026-03-16 00:46:38.366954 | orchestrator | 2026-03-16 00:46:38 | INFO  | Task db5ca6bf-af7e-48ca-a559-52461edeec37 is in state STARTED 2026-03-16 00:46:38.366965 | orchestrator | 2026-03-16 00:46:38 | INFO  | Task c3a7c8c2-a988-4e2c-af9b-cb44e64f61c4 is in state SUCCESS 2026-03-16 00:46:38.366975 | orchestrator | 2026-03-16 00:46:38 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:46:38.366986 | orchestrator | 2026-03-16 00:46:38 | INFO  | Task b7ba5b2c-9c37-42f7-b74d-4cd9806d3c2c is in state STARTED 2026-03-16 00:46:38.366997 | orchestrator | 2026-03-16 00:46:38 | INFO  | Task 8278370d-e96f-487a-81c2-e60484307db0 is in state STARTED 2026-03-16 00:46:38.367008 | orchestrator | 2026-03-16 00:46:38 | INFO  | Task 7edba792-48f9-421e-83c9-6908d78b0349 is in state STARTED 2026-03-16 00:46:38.367019 | orchestrator | 2026-03-16 00:46:38 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:46:41.424154 | orchestrator | 2026-03-16 00:46:41 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:46:41.426577 | orchestrator | 2026-03-16 00:46:41 | INFO  | Task dc1d723f-9b11-4d5d-86fd-5697e584fd35 is in state STARTED 2026-03-16 00:46:41.432514 | orchestrator | 2026-03-16 00:46:41 | INFO  | Task db5ca6bf-af7e-48ca-a559-52461edeec37 is in state STARTED 2026-03-16 00:46:41.437412 | orchestrator | 2026-03-16 00:46:41 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:46:41.438846 | orchestrator | 2026-03-16 00:46:41 | INFO  | Task b7ba5b2c-9c37-42f7-b74d-4cd9806d3c2c is in state STARTED 2026-03-16 00:46:41.442830 | orchestrator | 2026-03-16 00:46:41 | INFO  | Task 
8278370d-e96f-487a-81c2-e60484307db0 is in state STARTED 2026-03-16 00:46:41.445002 | orchestrator | 2026-03-16 00:46:41 | INFO  | Task 7edba792-48f9-421e-83c9-6908d78b0349 is in state STARTED 2026-03-16 00:46:41.445095 | orchestrator | 2026-03-16 00:46:41 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:46:44.547253 | orchestrator | 2026-03-16 00:46:44 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:46:44.547373 | orchestrator | 2026-03-16 00:46:44 | INFO  | Task dc1d723f-9b11-4d5d-86fd-5697e584fd35 is in state STARTED 2026-03-16 00:46:44.547385 | orchestrator | 2026-03-16 00:46:44 | INFO  | Task db5ca6bf-af7e-48ca-a559-52461edeec37 is in state STARTED 2026-03-16 00:46:44.547398 | orchestrator | 2026-03-16 00:46:44 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:46:44.547405 | orchestrator | 2026-03-16 00:46:44 | INFO  | Task b7ba5b2c-9c37-42f7-b74d-4cd9806d3c2c is in state STARTED 2026-03-16 00:46:44.547412 | orchestrator | 2026-03-16 00:46:44 | INFO  | Task 8278370d-e96f-487a-81c2-e60484307db0 is in state STARTED 2026-03-16 00:46:44.547419 | orchestrator | 2026-03-16 00:46:44 | INFO  | Task 7edba792-48f9-421e-83c9-6908d78b0349 is in state STARTED 2026-03-16 00:46:44.547426 | orchestrator | 2026-03-16 00:46:44 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:46:47.672593 | orchestrator | 2026-03-16 00:46:47 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:46:47.672976 | orchestrator | 2026-03-16 00:46:47 | INFO  | Task dc1d723f-9b11-4d5d-86fd-5697e584fd35 is in state STARTED 2026-03-16 00:46:47.673974 | orchestrator | 2026-03-16 00:46:47 | INFO  | Task db5ca6bf-af7e-48ca-a559-52461edeec37 is in state STARTED 2026-03-16 00:46:47.674581 | orchestrator | 2026-03-16 00:46:47 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:46:47.675284 | orchestrator | 2026-03-16 00:46:47 | INFO  | Task 
b7ba5b2c-9c37-42f7-b74d-4cd9806d3c2c is in state STARTED 2026-03-16 00:46:47.676005 | orchestrator | 2026-03-16 00:46:47 | INFO  | Task 8278370d-e96f-487a-81c2-e60484307db0 is in state STARTED 2026-03-16 00:46:47.676686 | orchestrator | 2026-03-16 00:46:47 | INFO  | Task 7edba792-48f9-421e-83c9-6908d78b0349 is in state STARTED 2026-03-16 00:46:47.676745 | orchestrator | 2026-03-16 00:46:47 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:46:50.733501 | orchestrator | 2026-03-16 00:46:50 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:46:50.735277 | orchestrator | 2026-03-16 00:46:50 | INFO  | Task dc1d723f-9b11-4d5d-86fd-5697e584fd35 is in state STARTED 2026-03-16 00:46:50.735320 | orchestrator | 2026-03-16 00:46:50 | INFO  | Task db5ca6bf-af7e-48ca-a559-52461edeec37 is in state STARTED 2026-03-16 00:46:50.735329 | orchestrator | 2026-03-16 00:46:50 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:46:50.736273 | orchestrator | 2026-03-16 00:46:50 | INFO  | Task b7ba5b2c-9c37-42f7-b74d-4cd9806d3c2c is in state STARTED 2026-03-16 00:46:50.737547 | orchestrator | 2026-03-16 00:46:50 | INFO  | Task 8278370d-e96f-487a-81c2-e60484307db0 is in state STARTED 2026-03-16 00:46:50.738174 | orchestrator | 2026-03-16 00:46:50 | INFO  | Task 7edba792-48f9-421e-83c9-6908d78b0349 is in state STARTED 2026-03-16 00:46:50.738725 | orchestrator | 2026-03-16 00:46:50 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:46:53.892968 | orchestrator | 2026-03-16 00:46:53 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:46:53.893522 | orchestrator | 2026-03-16 00:46:53 | INFO  | Task dc1d723f-9b11-4d5d-86fd-5697e584fd35 is in state STARTED 2026-03-16 00:46:53.895904 | orchestrator | 2026-03-16 00:46:53 | INFO  | Task db5ca6bf-af7e-48ca-a559-52461edeec37 is in state STARTED 2026-03-16 00:46:53.896362 | orchestrator | 2026-03-16 00:46:53 | INFO  | Task 
ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:46:53.898046 | orchestrator | 2026-03-16 00:46:53 | INFO  | Task b7ba5b2c-9c37-42f7-b74d-4cd9806d3c2c is in state STARTED 2026-03-16 00:46:53.898362 | orchestrator | 2026-03-16 00:46:53 | INFO  | Task 8278370d-e96f-487a-81c2-e60484307db0 is in state STARTED 2026-03-16 00:46:53.900337 | orchestrator | 2026-03-16 00:46:53 | INFO  | Task 7edba792-48f9-421e-83c9-6908d78b0349 is in state STARTED 2026-03-16 00:46:53.900388 | orchestrator | 2026-03-16 00:46:53 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:46:56.946503 | orchestrator | 2026-03-16 00:46:56 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:46:56.947019 | orchestrator | 2026-03-16 00:46:56 | INFO  | Task dc1d723f-9b11-4d5d-86fd-5697e584fd35 is in state STARTED 2026-03-16 00:46:56.948173 | orchestrator | 2026-03-16 00:46:56 | INFO  | Task db5ca6bf-af7e-48ca-a559-52461edeec37 is in state STARTED 2026-03-16 00:46:56.948758 | orchestrator | 2026-03-16 00:46:56 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:46:56.950473 | orchestrator | 2026-03-16 00:46:56 | INFO  | Task b7ba5b2c-9c37-42f7-b74d-4cd9806d3c2c is in state STARTED 2026-03-16 00:46:56.952459 | orchestrator | 2026-03-16 00:46:56 | INFO  | Task 8278370d-e96f-487a-81c2-e60484307db0 is in state STARTED 2026-03-16 00:46:56.961918 | orchestrator | 2026-03-16 00:46:56 | INFO  | Task 7edba792-48f9-421e-83c9-6908d78b0349 is in state STARTED 2026-03-16 00:46:56.962009 | orchestrator | 2026-03-16 00:46:56 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:47:00.013276 | orchestrator | 2026-03-16 00:47:00 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:47:00.014353 | orchestrator | 2026-03-16 00:47:00 | INFO  | Task dc1d723f-9b11-4d5d-86fd-5697e584fd35 is in state STARTED 2026-03-16 00:47:00.022647 | orchestrator | 2026-03-16 00:47:00 | INFO  | Task 
db5ca6bf-af7e-48ca-a559-52461edeec37 is in state STARTED 2026-03-16 00:47:00.022759 | orchestrator | 2026-03-16 00:47:00 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:47:00.023379 | orchestrator | 2026-03-16 00:47:00 | INFO  | Task b7ba5b2c-9c37-42f7-b74d-4cd9806d3c2c is in state STARTED 2026-03-16 00:47:00.024329 | orchestrator | 2026-03-16 00:47:00 | INFO  | Task 8278370d-e96f-487a-81c2-e60484307db0 is in state STARTED 2026-03-16 00:47:00.024540 | orchestrator | 2026-03-16 00:47:00 | INFO  | Task 7edba792-48f9-421e-83c9-6908d78b0349 is in state SUCCESS 2026-03-16 00:47:00.024562 | orchestrator | 2026-03-16 00:47:00 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:47:03.072014 | orchestrator | 2026-03-16 00:47:03 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:47:03.072433 | orchestrator | 2026-03-16 00:47:03 | INFO  | Task dc1d723f-9b11-4d5d-86fd-5697e584fd35 is in state STARTED 2026-03-16 00:47:03.074217 | orchestrator | 2026-03-16 00:47:03 | INFO  | Task db5ca6bf-af7e-48ca-a559-52461edeec37 is in state STARTED 2026-03-16 00:47:03.074745 | orchestrator | 2026-03-16 00:47:03 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:47:03.075448 | orchestrator | 2026-03-16 00:47:03 | INFO  | Task b7ba5b2c-9c37-42f7-b74d-4cd9806d3c2c is in state STARTED 2026-03-16 00:47:03.076127 | orchestrator | 2026-03-16 00:47:03 | INFO  | Task 8278370d-e96f-487a-81c2-e60484307db0 is in state STARTED 2026-03-16 00:47:03.076156 | orchestrator | 2026-03-16 00:47:03 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:47:06.107830 | orchestrator | 2026-03-16 00:47:06 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:47:06.108754 | orchestrator | 2026-03-16 00:47:06 | INFO  | Task dc1d723f-9b11-4d5d-86fd-5697e584fd35 is in state STARTED 2026-03-16 00:47:06.110167 | orchestrator | 2026-03-16 00:47:06 | INFO  | Task 
db5ca6bf-af7e-48ca-a559-52461edeec37 is in state STARTED 2026-03-16 00:47:06.111269 | orchestrator | 2026-03-16 00:47:06 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:47:06.112967 | orchestrator | 2026-03-16 00:47:06 | INFO  | Task b7ba5b2c-9c37-42f7-b74d-4cd9806d3c2c is in state STARTED 2026-03-16 00:47:06.114221 | orchestrator | 2026-03-16 00:47:06 | INFO  | Task 8278370d-e96f-487a-81c2-e60484307db0 is in state STARTED 2026-03-16 00:47:06.114263 | orchestrator | 2026-03-16 00:47:06 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:47:09.155141 | orchestrator | 2026-03-16 00:47:09 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:47:09.155239 | orchestrator | 2026-03-16 00:47:09 | INFO  | Task dc1d723f-9b11-4d5d-86fd-5697e584fd35 is in state STARTED 2026-03-16 00:47:09.158802 | orchestrator | 2026-03-16 00:47:09 | INFO  | Task db5ca6bf-af7e-48ca-a559-52461edeec37 is in state STARTED 2026-03-16 00:47:09.158870 | orchestrator | 2026-03-16 00:47:09 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:47:09.158883 | orchestrator | 2026-03-16 00:47:09 | INFO  | Task b7ba5b2c-9c37-42f7-b74d-4cd9806d3c2c is in state STARTED 2026-03-16 00:47:09.159162 | orchestrator | 2026-03-16 00:47:09 | INFO  | Task 8278370d-e96f-487a-81c2-e60484307db0 is in state STARTED 2026-03-16 00:47:09.159380 | orchestrator | 2026-03-16 00:47:09 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:47:12.200555 | orchestrator | 2026-03-16 00:47:12 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:47:12.202543 | orchestrator | 2026-03-16 00:47:12 | INFO  | Task dc1d723f-9b11-4d5d-86fd-5697e584fd35 is in state STARTED 2026-03-16 00:47:12.204465 | orchestrator | 2026-03-16 00:47:12 | INFO  | Task db5ca6bf-af7e-48ca-a559-52461edeec37 is in state STARTED 2026-03-16 00:47:12.205904 | orchestrator | 2026-03-16 00:47:12 | INFO  | Task 
ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:47:12.209417 | orchestrator | 2026-03-16 00:47:12 | INFO  | Task b7ba5b2c-9c37-42f7-b74d-4cd9806d3c2c is in state STARTED 2026-03-16 00:47:12.209586 | orchestrator | 2026-03-16 00:47:12 | INFO  | Task 8278370d-e96f-487a-81c2-e60484307db0 is in state SUCCESS 2026-03-16 00:47:12.211065 | orchestrator | 2026-03-16 00:47:12 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:47:15.245331 | orchestrator | 2026-03-16 00:47:15 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:47:15.245422 | orchestrator | 2026-03-16 00:47:15 | INFO  | Task dc1d723f-9b11-4d5d-86fd-5697e584fd35 is in state STARTED 2026-03-16 00:47:15.248795 | orchestrator | 2026-03-16 00:47:15 | INFO  | Task db5ca6bf-af7e-48ca-a559-52461edeec37 is in state STARTED 2026-03-16 00:47:15.249044 | orchestrator | 2026-03-16 00:47:15 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:47:15.249806 | orchestrator | 2026-03-16 00:47:15 | INFO  | Task b7ba5b2c-9c37-42f7-b74d-4cd9806d3c2c is in state STARTED 2026-03-16 00:47:15.249869 | orchestrator | 2026-03-16 00:47:15 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:47:18.308752 | orchestrator | 2026-03-16 00:47:18 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:47:18.311232 | orchestrator | 2026-03-16 00:47:18 | INFO  | Task dc1d723f-9b11-4d5d-86fd-5697e584fd35 is in state STARTED 2026-03-16 00:47:18.312793 | orchestrator | 2026-03-16 00:47:18 | INFO  | Task db5ca6bf-af7e-48ca-a559-52461edeec37 is in state STARTED 2026-03-16 00:47:18.313846 | orchestrator | 2026-03-16 00:47:18 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:47:18.315537 | orchestrator | 2026-03-16 00:47:18 | INFO  | Task b7ba5b2c-9c37-42f7-b74d-4cd9806d3c2c is in state STARTED 2026-03-16 00:47:18.319513 | orchestrator | 2026-03-16 00:47:18 | INFO  | Wait 1 
second(s) until the next check
2026-03-16 00:47:21.414810 | orchestrator | 2026-03-16 00:47:21 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:47:21.414915 | orchestrator | 2026-03-16 00:47:21 | INFO  | Task dc1d723f-9b11-4d5d-86fd-5697e584fd35 is in state STARTED
2026-03-16 00:47:21.416142 | orchestrator | 2026-03-16 00:47:21 | INFO  | Task db5ca6bf-af7e-48ca-a559-52461edeec37 is in state STARTED
2026-03-16 00:47:21.416742 | orchestrator | 2026-03-16 00:47:21 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED
2026-03-16 00:47:21.418521 | orchestrator | 2026-03-16 00:47:21 | INFO  | Task b7ba5b2c-9c37-42f7-b74d-4cd9806d3c2c is in state STARTED
2026-03-16 00:47:21.418565 | orchestrator | 2026-03-16 00:47:21 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:47:24.471422 | orchestrator | 2026-03-16 00:47:24 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:47:24.472046 | orchestrator | 2026-03-16 00:47:24 | INFO  | Task dc1d723f-9b11-4d5d-86fd-5697e584fd35 is in state STARTED
2026-03-16 00:47:24.473443 | orchestrator | 2026-03-16 00:47:24 | INFO  | Task db5ca6bf-af7e-48ca-a559-52461edeec37 is in state STARTED
2026-03-16 00:47:24.474150 | orchestrator | 2026-03-16 00:47:24 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED
2026-03-16 00:47:24.476606 | orchestrator | 2026-03-16 00:47:24 | INFO  | Task b7ba5b2c-9c37-42f7-b74d-4cd9806d3c2c is in state STARTED
2026-03-16 00:47:24.476638 | orchestrator | 2026-03-16 00:47:24 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:47:27.542138 | orchestrator | 2026-03-16 00:47:27 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:47:27.544992 | orchestrator | 2026-03-16 00:47:27 | INFO  | Task dc1d723f-9b11-4d5d-86fd-5697e584fd35 is in state STARTED
2026-03-16 00:47:27.546253 | orchestrator | 2026-03-16 00:47:27 | INFO  | Task db5ca6bf-af7e-48ca-a559-52461edeec37 is in state STARTED
2026-03-16 00:47:27.553430 | orchestrator | 2026-03-16 00:47:27 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED
2026-03-16 00:47:27.557653 | orchestrator | 2026-03-16 00:47:27 | INFO  | Task b7ba5b2c-9c37-42f7-b74d-4cd9806d3c2c is in state STARTED
2026-03-16 00:47:27.557705 | orchestrator | 2026-03-16 00:47:27 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:47:30.669730 | orchestrator | 2026-03-16 00:47:30 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:47:30.669787 | orchestrator | 2026-03-16 00:47:30 | INFO  | Task dc1d723f-9b11-4d5d-86fd-5697e584fd35 is in state STARTED
2026-03-16 00:47:30.672548 | orchestrator | 2026-03-16 00:47:30 | INFO  | Task db5ca6bf-af7e-48ca-a559-52461edeec37 is in state STARTED
2026-03-16 00:47:30.672596 | orchestrator | 2026-03-16 00:47:30 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED
2026-03-16 00:47:30.674153 | orchestrator | 2026-03-16 00:47:30 | INFO  | Task b7ba5b2c-9c37-42f7-b74d-4cd9806d3c2c is in state STARTED
2026-03-16 00:47:30.674199 | orchestrator | 2026-03-16 00:47:30 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:47:33.751139 | orchestrator | 2026-03-16 00:47:33 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:47:33.824501 | orchestrator | 2026-03-16 00:47:33 | INFO  | Task dc1d723f-9b11-4d5d-86fd-5697e584fd35 is in state STARTED
2026-03-16 00:47:33.863592 | orchestrator | 2026-03-16 00:47:33 | INFO  | Task db5ca6bf-af7e-48ca-a559-52461edeec37 is in state STARTED
2026-03-16 00:47:33.867321 | orchestrator | 2026-03-16 00:47:33 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED
2026-03-16 00:47:33.872635 | orchestrator | 2026-03-16 00:47:33 | INFO  | Task b7ba5b2c-9c37-42f7-b74d-4cd9806d3c2c is in state STARTED
2026-03-16 00:47:33.872706 | orchestrator | 2026-03-16 00:47:33 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:47:36.905318 | orchestrator | 2026-03-16 00:47:36 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:47:36.910371 | orchestrator | 2026-03-16 00:47:36 | INFO  | Task dc1d723f-9b11-4d5d-86fd-5697e584fd35 is in state STARTED
2026-03-16 00:47:36.910451 | orchestrator | 2026-03-16 00:47:36 | INFO  | Task db5ca6bf-af7e-48ca-a559-52461edeec37 is in state STARTED
2026-03-16 00:47:36.910459 | orchestrator | 2026-03-16 00:47:36 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED
2026-03-16 00:47:36.912267 | orchestrator | 2026-03-16 00:47:36 | INFO  | Task b7ba5b2c-9c37-42f7-b74d-4cd9806d3c2c is in state STARTED
2026-03-16 00:47:36.912311 | orchestrator | 2026-03-16 00:47:36 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:47:39.935341 | orchestrator | 2026-03-16 00:47:39 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:47:39.936360 | orchestrator | 2026-03-16 00:47:39 | INFO  | Task dc1d723f-9b11-4d5d-86fd-5697e584fd35 is in state STARTED
2026-03-16 00:47:39.936493 | orchestrator | 2026-03-16 00:47:39 | INFO  | Task db5ca6bf-af7e-48ca-a559-52461edeec37 is in state STARTED
2026-03-16 00:47:39.938003 | orchestrator | 2026-03-16 00:47:39 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED
2026-03-16 00:47:39.938767 | orchestrator | 2026-03-16 00:47:39 | INFO  | Task b7ba5b2c-9c37-42f7-b74d-4cd9806d3c2c is in state STARTED
2026-03-16 00:47:39.938806 | orchestrator | 2026-03-16 00:47:39 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:47:42.978919 | orchestrator | 2026-03-16 00:47:42 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:47:42.981460 | orchestrator | 2026-03-16 00:47:42 | INFO  | Task dc1d723f-9b11-4d5d-86fd-5697e584fd35 is in state STARTED
2026-03-16 00:47:42.982192 | orchestrator | 2026-03-16 00:47:42 | INFO  | Task db5ca6bf-af7e-48ca-a559-52461edeec37 is in state STARTED
2026-03-16 00:47:42.983154 | orchestrator | 2026-03-16 00:47:42 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED
2026-03-16 00:47:42.984050 | orchestrator | 2026-03-16 00:47:42 | INFO  | Task b7ba5b2c-9c37-42f7-b74d-4cd9806d3c2c is in state STARTED
2026-03-16 00:47:42.984087 | orchestrator | 2026-03-16 00:47:42 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:47:46.013218 | orchestrator | 2026-03-16 00:47:46 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:47:46.013292 | orchestrator | 2026-03-16 00:47:46 | INFO  | Task dc1d723f-9b11-4d5d-86fd-5697e584fd35 is in state STARTED
2026-03-16 00:47:46.013919 | orchestrator | 2026-03-16 00:47:46 | INFO  | Task db5ca6bf-af7e-48ca-a559-52461edeec37 is in state STARTED
2026-03-16 00:47:46.014584 | orchestrator | 2026-03-16 00:47:46 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED
2026-03-16 00:47:46.015320 | orchestrator | 2026-03-16 00:47:46 | INFO  | Task b7ba5b2c-9c37-42f7-b74d-4cd9806d3c2c is in state STARTED
2026-03-16 00:47:46.015371 | orchestrator | 2026-03-16 00:47:46 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:47:49.046889 | orchestrator | 2026-03-16 00:47:49 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:47:49.049883 | orchestrator | 2026-03-16 00:47:49 | INFO  | Task dc1d723f-9b11-4d5d-86fd-5697e584fd35 is in state STARTED
2026-03-16 00:47:49.051882 | orchestrator | 2026-03-16 00:47:49 | INFO  | Task db5ca6bf-af7e-48ca-a559-52461edeec37 is in state STARTED
2026-03-16 00:47:49.055584 | orchestrator | 2026-03-16 00:47:49 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED
2026-03-16 00:47:49.057165 | orchestrator | 2026-03-16 00:47:49 | INFO  | Task b7ba5b2c-9c37-42f7-b74d-4cd9806d3c2c is in state STARTED
2026-03-16 00:47:49.057365 | orchestrator | 2026-03-16 00:47:49 | INFO  | Wait 1
second(s) until the next check
2026-03-16 00:47:52.092570 | orchestrator | 2026-03-16 00:47:52 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:47:52.097588 | orchestrator | 2026-03-16 00:47:52 | INFO  | Task dc1d723f-9b11-4d5d-86fd-5697e584fd35 is in state STARTED
2026-03-16 00:47:52.103307 | orchestrator | 2026-03-16 00:47:52 | INFO  | Task db5ca6bf-af7e-48ca-a559-52461edeec37 is in state STARTED
2026-03-16 00:47:52.125536 | orchestrator | 2026-03-16 00:47:52 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED
2026-03-16 00:47:52.125583 | orchestrator | 2026-03-16 00:47:52 | INFO  | Task b7ba5b2c-9c37-42f7-b74d-4cd9806d3c2c is in state STARTED
2026-03-16 00:47:52.125589 | orchestrator | 2026-03-16 00:47:52 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:47:55.170151 | orchestrator | 2026-03-16 00:47:55 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:47:55.173464 | orchestrator | 2026-03-16 00:47:55 | INFO  | Task dc1d723f-9b11-4d5d-86fd-5697e584fd35 is in state SUCCESS
2026-03-16 00:47:55.174488 | orchestrator |
2026-03-16 00:47:55.174532 | orchestrator |
2026-03-16 00:47:55.174537 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-03-16 00:47:55.174541 | orchestrator |
2026-03-16 00:47:55.174545 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-03-16 00:47:55.174549 | orchestrator | Monday 16 March 2026 00:46:21 +0000 (0:00:00.949) 0:00:00.949 **********
2026-03-16 00:47:55.174552 | orchestrator | ok: [testbed-manager] => {
2026-03-16 00:47:55.174556 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-03-16 00:47:55.174560 | orchestrator | }
2026-03-16 00:47:55.174566 | orchestrator |
2026-03-16 00:47:55.174571 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-03-16 00:47:55.174576 | orchestrator | Monday 16 March 2026 00:46:21 +0000 (0:00:00.445) 0:00:01.395 **********
2026-03-16 00:47:55.174582 | orchestrator | ok: [testbed-manager]
2026-03-16 00:47:55.174588 | orchestrator |
2026-03-16 00:47:55.174592 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-03-16 00:47:55.174595 | orchestrator | Monday 16 March 2026 00:46:24 +0000 (0:00:02.350) 0:00:03.745 **********
2026-03-16 00:47:55.174604 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-03-16 00:47:55.174610 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-03-16 00:47:55.174621 | orchestrator |
2026-03-16 00:47:55.174626 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-03-16 00:47:55.174688 | orchestrator | Monday 16 March 2026 00:46:25 +0000 (0:00:01.326) 0:00:05.071 **********
2026-03-16 00:47:55.174692 | orchestrator | changed: [testbed-manager]
2026-03-16 00:47:55.174696 | orchestrator |
2026-03-16 00:47:55.174699 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-03-16 00:47:55.174712 | orchestrator | Monday 16 March 2026 00:46:28 +0000 (0:00:03.575) 0:00:08.647 **********
2026-03-16 00:47:55.174715 | orchestrator | changed: [testbed-manager]
2026-03-16 00:47:55.174718 | orchestrator |
2026-03-16 00:47:55.174722 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-03-16 00:47:55.174725 | orchestrator | Monday 16 March 2026 00:46:30 +0000 (0:00:01.436) 0:00:10.083 **********
2026-03-16 00:47:55.174728 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-03-16 00:47:55.174731 | orchestrator | ok: [testbed-manager]
2026-03-16 00:47:55.174734 | orchestrator |
2026-03-16 00:47:55.174737 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-03-16 00:47:55.174741 | orchestrator | Monday 16 March 2026 00:46:56 +0000 (0:00:25.801) 0:00:35.884 **********
2026-03-16 00:47:55.174744 | orchestrator | changed: [testbed-manager]
2026-03-16 00:47:55.174747 | orchestrator |
2026-03-16 00:47:55.174750 | orchestrator | PLAY RECAP *********************************************************************
2026-03-16 00:47:55.174753 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-16 00:47:55.174757 | orchestrator |
2026-03-16 00:47:55.174760 | orchestrator |
2026-03-16 00:47:55.174763 | orchestrator | TASKS RECAP ********************************************************************
2026-03-16 00:47:55.174766 | orchestrator | Monday 16 March 2026 00:46:57 +0000 (0:00:01.644) 0:00:37.529 **********
2026-03-16 00:47:55.174770 | orchestrator | ===============================================================================
2026-03-16 00:47:55.174773 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.80s
2026-03-16 00:47:55.174776 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 3.58s
2026-03-16 00:47:55.174779 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.35s
2026-03-16 00:47:55.174782 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 1.64s
2026-03-16 00:47:55.174785 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.44s
2026-03-16 00:47:55.174788 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.33s
2026-03-16 00:47:55.174791 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.45s
2026-03-16 00:47:55.174794 | orchestrator |
2026-03-16 00:47:55.174797 | orchestrator |
2026-03-16 00:47:55.174800 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-03-16 00:47:55.174808 | orchestrator |
2026-03-16 00:47:55.174812 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-03-16 00:47:55.174819 | orchestrator | Monday 16 March 2026 00:46:22 +0000 (0:00:00.563) 0:00:00.563 **********
2026-03-16 00:47:55.174823 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-03-16 00:47:55.174827 | orchestrator |
2026-03-16 00:47:55.174830 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-03-16 00:47:55.174833 | orchestrator | Monday 16 March 2026 00:46:23 +0000 (0:00:00.739) 0:00:01.302 **********
2026-03-16 00:47:55.174840 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-03-16 00:47:55.174843 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-03-16 00:47:55.174846 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-03-16 00:47:55.174856 | orchestrator |
2026-03-16 00:47:55.174861 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-03-16 00:47:55.174866 | orchestrator | Monday 16 March 2026 00:46:25 +0000 (0:00:02.194) 0:00:03.497 **********
2026-03-16 00:47:55.174872 | orchestrator | changed: [testbed-manager]
2026-03-16 00:47:55.174880 | orchestrator |
2026-03-16 00:47:55.174884 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-03-16 00:47:55.174889 | orchestrator | Monday 16 March 2026 00:46:28 +0000 (0:00:02.796) 0:00:06.294 **********
2026-03-16 00:47:55.174906 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-03-16 00:47:55.174912 | orchestrator | ok: [testbed-manager]
2026-03-16 00:47:55.174918 | orchestrator |
2026-03-16 00:47:55.174923 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-03-16 00:47:55.174928 | orchestrator | Monday 16 March 2026 00:47:03 +0000 (0:00:35.122) 0:00:41.416 **********
2026-03-16 00:47:55.174932 | orchestrator | changed: [testbed-manager]
2026-03-16 00:47:55.174938 | orchestrator |
2026-03-16 00:47:55.174943 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-03-16 00:47:55.174947 | orchestrator | Monday 16 March 2026 00:47:05 +0000 (0:00:01.827) 0:00:43.244 **********
2026-03-16 00:47:55.174952 | orchestrator | ok: [testbed-manager]
2026-03-16 00:47:55.174957 | orchestrator |
2026-03-16 00:47:55.174962 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-03-16 00:47:55.174967 | orchestrator | Monday 16 March 2026 00:47:06 +0000 (0:00:00.843) 0:00:44.087 **********
2026-03-16 00:47:55.174972 | orchestrator | changed: [testbed-manager]
2026-03-16 00:47:55.174977 | orchestrator |
2026-03-16 00:47:55.174982 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-03-16 00:47:55.174988 | orchestrator | Monday 16 March 2026 00:47:08 +0000 (0:00:02.204) 0:00:46.291 **********
2026-03-16 00:47:55.174993 | orchestrator | changed: [testbed-manager]
2026-03-16 00:47:55.174999 | orchestrator |
2026-03-16 00:47:55.175011 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-03-16 00:47:55.175014 | orchestrator | Monday 16 March 2026 00:47:09 +0000 (0:00:00.936) 0:00:47.228 **********
2026-03-16 00:47:55.175018 | orchestrator | changed: [testbed-manager]
2026-03-16 00:47:55.175023 | orchestrator |
2026-03-16 00:47:55.175031 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-03-16 00:47:55.175037 | orchestrator | Monday 16 March 2026 00:47:09 +0000 (0:00:00.560) 0:00:47.788 **********
2026-03-16 00:47:55.175042 | orchestrator | ok: [testbed-manager]
2026-03-16 00:47:55.175046 | orchestrator |
2026-03-16 00:47:55.175052 | orchestrator | PLAY RECAP *********************************************************************
2026-03-16 00:47:55.175057 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-16 00:47:55.175068 | orchestrator |
2026-03-16 00:47:55.175072 | orchestrator |
2026-03-16 00:47:55.175075 | orchestrator | TASKS RECAP ********************************************************************
2026-03-16 00:47:55.175079 | orchestrator | Monday 16 March 2026 00:47:10 +0000 (0:00:00.615) 0:00:48.404 **********
2026-03-16 00:47:55.175082 | orchestrator | ===============================================================================
2026-03-16 00:47:55.175086 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 35.12s
2026-03-16 00:47:55.175090 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.80s
2026-03-16 00:47:55.175095 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.20s
2026-03-16 00:47:55.175100 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.20s
2026-03-16 00:47:55.175105 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.83s
2026-03-16 00:47:55.175110 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.94s
2026-03-16 00:47:55.175116 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.84s
2026-03-16 00:47:55.175121 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.74s
2026-03-16 00:47:55.175126 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.62s
2026-03-16 00:47:55.175131 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.56s
2026-03-16 00:47:55.175136 | orchestrator |
2026-03-16 00:47:55.175142 | orchestrator |
2026-03-16 00:47:55.175147 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-16 00:47:55.175157 | orchestrator |
2026-03-16 00:47:55.175162 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-16 00:47:55.175167 | orchestrator | Monday 16 March 2026 00:46:20 +0000 (0:00:00.837) 0:00:00.837 **********
2026-03-16 00:47:55.175175 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-03-16 00:47:55.175181 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-03-16 00:47:55.175186 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-03-16 00:47:55.175191 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-03-16 00:47:55.175197 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-03-16 00:47:55.175201 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-03-16 00:47:55.175207 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-03-16 00:47:55.175212 | orchestrator |
2026-03-16 00:47:55.175217 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-03-16 00:47:55.175229 | orchestrator |
2026-03-16 00:47:55.175236 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-03-16 00:47:55.175241 | orchestrator | Monday 16 March 2026 00:46:23 +0000 (0:00:02.523) 0:00:03.360 **********
2026-03-16 00:47:55.175256 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-16 00:47:55.175263 | orchestrator |
2026-03-16 00:47:55.175269 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-03-16 00:47:55.175275 | orchestrator | Monday 16 March 2026 00:46:24 +0000 (0:00:01.428) 0:00:04.789 **********
2026-03-16 00:47:55.175281 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:47:55.175285 | orchestrator | ok: [testbed-manager]
2026-03-16 00:47:55.175289 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:47:55.175294 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:47:55.175298 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:47:55.175309 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:47:55.175315 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:47:55.175321 | orchestrator |
2026-03-16 00:47:55.175326 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-03-16 00:47:55.175332 | orchestrator | Monday 16 March 2026 00:46:27 +0000 (0:00:02.573) 0:00:07.362 **********
2026-03-16 00:47:55.175336 | orchestrator | ok: [testbed-manager]
2026-03-16 00:47:55.175339 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:47:55.175343 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:47:55.175346 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:47:55.175351 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:47:55.175356 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:47:55.175363 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:47:55.175369 | orchestrator |
2026-03-16 00:47:55.175374 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-03-16 00:47:55.175379 | orchestrator | Monday 16 March 2026 00:46:30 +0000 (0:00:03.450) 0:00:10.813 **********
2026-03-16 00:47:55.175384 | orchestrator | changed: [testbed-manager]
2026-03-16 00:47:55.175389 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:47:55.175394 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:47:55.175479 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:47:55.175484 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:47:55.175487 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:47:55.175490 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:47:55.175494 | orchestrator |
2026-03-16 00:47:55.175497 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-03-16 00:47:55.175500 | orchestrator | Monday 16 March 2026 00:46:33 +0000 (0:00:02.864) 0:00:13.678 **********
2026-03-16 00:47:55.175503 | orchestrator | changed: [testbed-manager]
2026-03-16 00:47:55.175506 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:47:55.175513 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:47:55.175516 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:47:55.175519 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:47:55.175522 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:47:55.175525 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:47:55.175529 | orchestrator |
2026-03-16 00:47:55.175532 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-03-16 00:47:55.175535 | orchestrator | Monday 16 March 2026 00:46:44 +0000 (0:00:11.375) 0:00:25.054 **********
2026-03-16 00:47:55.175538 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:47:55.175541 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:47:55.175544 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:47:55.175547 | orchestrator | changed: [testbed-manager]
2026-03-16 00:47:55.175550 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:47:55.175553 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:47:55.175556 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:47:55.175559 | orchestrator |
2026-03-16 00:47:55.175563 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-03-16 00:47:55.175566 | orchestrator | Monday 16 March 2026 00:47:23 +0000 (0:00:38.735) 0:01:03.789 **********
2026-03-16 00:47:55.175569 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-16 00:47:55.175573 | orchestrator |
2026-03-16 00:47:55.175576 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-03-16 00:47:55.175579 | orchestrator | Monday 16 March 2026 00:47:25 +0000 (0:00:01.624) 0:01:05.414 **********
2026-03-16 00:47:55.175583 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-03-16 00:47:55.175586 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-03-16 00:47:55.175589 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-03-16 00:47:55.175592 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-03-16 00:47:55.175596 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-03-16 00:47:55.175599 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-03-16 00:47:55.175602 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-03-16 00:47:55.175605 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-03-16 00:47:55.175608 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-03-16 00:47:55.175611 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-03-16 00:47:55.175614 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-03-16 00:47:55.175617 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-03-16 00:47:55.175620 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-03-16 00:47:55.175623 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-03-16 00:47:55.175626 | orchestrator |
2026-03-16 00:47:55.175642 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-03-16 00:47:55.175646 | orchestrator | Monday 16 March 2026 00:47:31 +0000 (0:00:06.381) 0:01:11.795 **********
2026-03-16 00:47:55.175649 | orchestrator | ok: [testbed-manager]
2026-03-16 00:47:55.175652 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:47:55.175655 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:47:55.175658 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:47:55.175661 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:47:55.175664 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:47:55.175667 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:47:55.175670 | orchestrator |
2026-03-16 00:47:55.175673 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-03-16 00:47:55.175679 | orchestrator | Monday 16 March 2026 00:47:32 +0000 (0:00:01.318) 0:01:13.113 **********
2026-03-16 00:47:55.175683 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:47:55.175686 | orchestrator | changed: [testbed-manager]
2026-03-16 00:47:55.175691 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:47:55.175694 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:47:55.175697 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:47:55.175700 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:47:55.175703 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:47:55.175706 | orchestrator |
2026-03-16 00:47:55.175709 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-03-16 00:47:55.175716 | orchestrator | Monday 16 March 2026 00:47:34 +0000 (0:00:01.785) 0:01:14.899 **********
2026-03-16 00:47:55.175719 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:47:55.175722 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:47:55.175725 | orchestrator | ok: [testbed-manager]
2026-03-16 00:47:55.175728 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:47:55.175732 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:47:55.175735 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:47:55.175738 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:47:55.175741 | orchestrator |
2026-03-16 00:47:55.175744 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-03-16 00:47:55.175747 | orchestrator | Monday 16 March 2026 00:47:36 +0000 (0:00:01.282) 0:01:16.183 **********
2026-03-16 00:47:55.175750 | orchestrator | ok: [testbed-manager]
2026-03-16 00:47:55.175753 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:47:55.175756 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:47:55.175759 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:47:55.175762 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:47:55.175765 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:47:55.175768 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:47:55.175771 | orchestrator |
2026-03-16 00:47:55.175774 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-03-16 00:47:55.175777 | orchestrator | Monday 16 March 2026 00:47:38 +0000 (0:00:02.027) 0:01:18.211 **********
2026-03-16 00:47:55.175780 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-03-16 00:47:55.175785 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-16 00:47:55.175788 | orchestrator |
2026-03-16 00:47:55.175792 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-03-16 00:47:55.175795 | orchestrator | Monday 16 March 2026 00:47:39 +0000 (0:00:01.251) 0:01:19.463 **********
2026-03-16 00:47:55.175798 | orchestrator | changed: [testbed-manager]
2026-03-16 00:47:55.175801 | orchestrator |
2026-03-16 00:47:55.175804 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-03-16 00:47:55.175807 | orchestrator | Monday 16 March 2026 00:47:41 +0000 (0:00:01.744) 0:01:21.207 **********
2026-03-16 00:47:55.175810 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:47:55.175813 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:47:55.175816 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:47:55.175819 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:47:55.175822 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:47:55.175825 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:47:55.175828 | orchestrator | changed: [testbed-manager]
2026-03-16 00:47:55.175831 | orchestrator |
2026-03-16 00:47:55.175835 | orchestrator | PLAY RECAP *********************************************************************
2026-03-16 00:47:55.175838 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-16 00:47:55.175841 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-16 00:47:55.175844 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-16 00:47:55.175851 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-16 00:47:55.175854 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-16 00:47:55.175857 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-16 00:47:55.175860 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-16 00:47:55.175863 | orchestrator |
2026-03-16 00:47:55.175866 | orchestrator |
2026-03-16 00:47:55.175869 | orchestrator | TASKS RECAP ********************************************************************
2026-03-16 00:47:55.175872 | orchestrator | Monday 16 March 2026 00:47:52 +0000 (0:00:11.722) 0:01:32.930 **********
2026-03-16 00:47:55.175875 | orchestrator | ===============================================================================
2026-03-16 00:47:55.175878 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 38.74s
2026-03-16 00:47:55.175882 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.73s
2026-03-16 00:47:55.175885 | orchestrator | osism.services.netdata : Add repository -------------------------------- 11.38s
2026-03-16 00:47:55.175888 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 6.38s
2026-03-16 00:47:55.175891 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.45s
2026-03-16 00:47:55.175894 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.86s
2026-03-16 00:47:55.175899 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.57s
2026-03-16 00:47:55.175902 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.52s
2026-03-16 00:47:55.175905 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.03s
2026-03-16 00:47:55.175908 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.79s
2026-03-16 00:47:55.175911 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.74s
2026-03-16 00:47:55.175916 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.62s
2026-03-16 00:47:55.175919 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.43s
2026-03-16 00:47:55.175922 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.32s
2026-03-16 00:47:55.175925 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.28s
2026-03-16 00:47:55.175928 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.25s
2026-03-16 00:47:55.175932 | orchestrator | 2026-03-16 00:47:55 | INFO  | Task db5ca6bf-af7e-48ca-a559-52461edeec37 is in state STARTED
2026-03-16 00:47:55.178002 | orchestrator | 2026-03-16 00:47:55 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED
2026-03-16 00:47:55.178979 | orchestrator | 2026-03-16 00:47:55 | INFO  | Task b7ba5b2c-9c37-42f7-b74d-4cd9806d3c2c is in state STARTED
2026-03-16 00:47:55.180218 | orchestrator | 2026-03-16 00:47:55 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:47:58.266931 | orchestrator | 2026-03-16 00:47:58 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:47:58.275825 | orchestrator | 2026-03-16 00:47:58 | INFO  | Task db5ca6bf-af7e-48ca-a559-52461edeec37 is in state STARTED
2026-03-16 00:47:58.278570 | orchestrator | 2026-03-16 00:47:58 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED
2026-03-16 00:47:58.278609 | orchestrator | 2026-03-16 00:47:58 | INFO  | Task b7ba5b2c-9c37-42f7-b74d-4cd9806d3c2c is in state STARTED
2026-03-16 00:47:58.280914 | orchestrator | 2026-03-16 00:47:58 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:48:01.333294 | orchestrator | 2026-03-16 00:48:01 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:48:01.336248 | orchestrator | 2026-03-16 00:48:01 | INFO  | Task db5ca6bf-af7e-48ca-a559-52461edeec37 is in state STARTED
2026-03-16 00:48:01.339789 | orchestrator | 2026-03-16 00:48:01 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED
2026-03-16 00:48:01.344067 | orchestrator | 2026-03-16 00:48:01 | INFO  | Task b7ba5b2c-9c37-42f7-b74d-4cd9806d3c2c is in state STARTED
2026-03-16 00:48:01.344099 | orchestrator | 2026-03-16 00:48:01 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:48:04.427907 | orchestrator | 2026-03-16 00:48:04 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:48:04.432582 | orchestrator | 2026-03-16 00:48:04 | INFO  | Task db5ca6bf-af7e-48ca-a559-52461edeec37 is in state STARTED
2026-03-16 00:48:04.434419 | orchestrator | 2026-03-16 00:48:04 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED
2026-03-16 00:48:04.435998 | orchestrator | 2026-03-16 00:48:04 | INFO  | Task b7ba5b2c-9c37-42f7-b74d-4cd9806d3c2c is in state SUCCESS
2026-03-16 00:48:04.436035 | orchestrator | 2026-03-16 00:48:04 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:48:07.504802 | orchestrator | 2026-03-16 00:48:07 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:48:07.508351 | orchestrator | 2026-03-16 00:48:07 | INFO  | Task db5ca6bf-af7e-48ca-a559-52461edeec37 is in state STARTED
2026-03-16 00:48:07.511588 | orchestrator | 2026-03-16 00:48:07 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED
2026-03-16 00:48:07.511643 | orchestrator | 2026-03-16 00:48:07 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:48:10.569158 | orchestrator | 2026-03-16 00:48:10 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:48:10.572087 | orchestrator | 2026-03-16 00:48:10 | INFO  | Task db5ca6bf-af7e-48ca-a559-52461edeec37 is in state STARTED
2026-03-16 00:48:10.576532 | orchestrator | 2026-03-16 00:48:10 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED
2026-03-16 00:48:10.576580 | orchestrator | 2026-03-16 00:48:10 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:48:13.615941 | orchestrator | 2026-03-16 00:48:13 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:48:13.616107 | orchestrator | 2026-03-16 00:48:13 | INFO  | Task db5ca6bf-af7e-48ca-a559-52461edeec37 is in state STARTED
2026-03-16 00:48:13.617333 | orchestrator | 2026-03-16 00:48:13 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED
2026-03-16 00:48:13.617364 | orchestrator | 2026-03-16 00:48:13 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:48:16.661548 | orchestrator | 2026-03-16 00:48:16 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:48:16.662982 | orchestrator | 2026-03-16 00:48:16 | INFO  | Task db5ca6bf-af7e-48ca-a559-52461edeec37 is in state STARTED
2026-03-16 00:48:16.664436 | orchestrator | 2026-03-16 00:48:16 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED
2026-03-16 00:48:16.664469 | orchestrator | 2026-03-16 00:48:16 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:48:19.705747 | orchestrator | 2026-03-16 00:48:19 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:48:19.705794 | orchestrator | 2026-03-16 00:48:19 | INFO  | Task db5ca6bf-af7e-48ca-a559-52461edeec37 is in state STARTED
2026-03-16 00:48:19.708555 | orchestrator | 2026-03-16 00:48:19 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED
2026-03-16 00:48:19.708590 | orchestrator | 2026-03-16 00:48:19 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:48:22.743668 | orchestrator | 2026-03-16 00:48:22 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state
STARTED 2026-03-16 00:48:22.743742 | orchestrator | 2026-03-16 00:48:22 | INFO  | Task db5ca6bf-af7e-48ca-a559-52461edeec37 is in state STARTED 2026-03-16 00:48:22.743754 | orchestrator | 2026-03-16 00:48:22 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:48:22.743762 | orchestrator | 2026-03-16 00:48:22 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:48:25.786491 | orchestrator | 2026-03-16 00:48:25 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:48:25.787970 | orchestrator | 2026-03-16 00:48:25 | INFO  | Task db5ca6bf-af7e-48ca-a559-52461edeec37 is in state STARTED 2026-03-16 00:48:25.789149 | orchestrator | 2026-03-16 00:48:25 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:48:25.789194 | orchestrator | 2026-03-16 00:48:25 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:48:28.813413 | orchestrator | 2026-03-16 00:48:28 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:48:28.822496 | orchestrator | 2026-03-16 00:48:28 | INFO  | Task db5ca6bf-af7e-48ca-a559-52461edeec37 is in state SUCCESS 2026-03-16 00:48:28.823990 | orchestrator | 2026-03-16 00:48:28.824097 | orchestrator | 2026-03-16 00:48:28.824106 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2026-03-16 00:48:28.824111 | orchestrator | 2026-03-16 00:48:28.824115 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2026-03-16 00:48:28.824121 | orchestrator | Monday 16 March 2026 00:46:41 +0000 (0:00:00.320) 0:00:00.320 ********** 2026-03-16 00:48:28.824125 | orchestrator | ok: [testbed-manager] 2026-03-16 00:48:28.824130 | orchestrator | 2026-03-16 00:48:28.824134 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2026-03-16 00:48:28.824138 | orchestrator | Monday 16 March 2026 00:46:42 +0000 
(0:00:01.360) 0:00:01.681 ********** 2026-03-16 00:48:28.824143 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2026-03-16 00:48:28.824148 | orchestrator | 2026-03-16 00:48:28.824152 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2026-03-16 00:48:28.824156 | orchestrator | Monday 16 March 2026 00:46:43 +0000 (0:00:00.691) 0:00:02.372 ********** 2026-03-16 00:48:28.824160 | orchestrator | changed: [testbed-manager] 2026-03-16 00:48:28.824164 | orchestrator | 2026-03-16 00:48:28.824167 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2026-03-16 00:48:28.824171 | orchestrator | Monday 16 March 2026 00:46:44 +0000 (0:00:01.004) 0:00:03.377 ********** 2026-03-16 00:48:28.824176 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2026-03-16 00:48:28.824180 | orchestrator | ok: [testbed-manager] 2026-03-16 00:48:28.824184 | orchestrator | 2026-03-16 00:48:28.824188 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2026-03-16 00:48:28.824191 | orchestrator | Monday 16 March 2026 00:47:53 +0000 (0:01:09.717) 0:01:13.094 ********** 2026-03-16 00:48:28.824195 | orchestrator | changed: [testbed-manager] 2026-03-16 00:48:28.824199 | orchestrator | 2026-03-16 00:48:28.824203 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-16 00:48:28.824207 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-16 00:48:28.824213 | orchestrator | 2026-03-16 00:48:28.824231 | orchestrator | 2026-03-16 00:48:28.824236 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-16 00:48:28.824254 | orchestrator | Monday 16 March 2026 00:48:03 +0000 (0:00:09.460) 0:01:22.555 ********** 2026-03-16 00:48:28.824258 | orchestrator 
| =============================================================================== 2026-03-16 00:48:28.824262 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 69.72s 2026-03-16 00:48:28.824266 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 9.46s 2026-03-16 00:48:28.824270 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.36s 2026-03-16 00:48:28.824274 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.00s 2026-03-16 00:48:28.824278 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.69s 2026-03-16 00:48:28.824281 | orchestrator | 2026-03-16 00:48:28.824317 | orchestrator | 2026-03-16 00:48:28.824321 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-03-16 00:48:28.824325 | orchestrator | 2026-03-16 00:48:28.824329 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-16 00:48:28.824332 | orchestrator | Monday 16 March 2026 00:46:12 +0000 (0:00:00.207) 0:00:00.207 ********** 2026-03-16 00:48:28.824339 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-16 00:48:28.824347 | orchestrator | 2026-03-16 00:48:28.824352 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-03-16 00:48:28.824358 | orchestrator | Monday 16 March 2026 00:46:13 +0000 (0:00:01.069) 0:00:01.277 ********** 2026-03-16 00:48:28.824363 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-16 00:48:28.824369 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-16 00:48:28.824375 | orchestrator | changed: [testbed-node-0] => 
(item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-16 00:48:28.824381 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-16 00:48:28.824387 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-16 00:48:28.824393 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-16 00:48:28.824407 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-16 00:48:28.824414 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-16 00:48:28.824420 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-16 00:48:28.824426 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-16 00:48:28.824432 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-16 00:48:28.824438 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-16 00:48:28.824445 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-16 00:48:28.824451 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-16 00:48:28.824457 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-16 00:48:28.824463 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-16 00:48:28.824481 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-16 00:48:28.824488 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-16 00:48:28.824494 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 
'fluentd'}, 'fluentd']) 2026-03-16 00:48:28.824500 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-16 00:48:28.824506 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-16 00:48:28.824519 | orchestrator | 2026-03-16 00:48:28.824524 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-16 00:48:28.824530 | orchestrator | Monday 16 March 2026 00:46:18 +0000 (0:00:04.696) 0:00:05.973 ********** 2026-03-16 00:48:28.824536 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-16 00:48:28.824543 | orchestrator | 2026-03-16 00:48:28.824549 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-03-16 00:48:28.824555 | orchestrator | Monday 16 March 2026 00:46:19 +0000 (0:00:01.462) 0:00:07.436 ********** 2026-03-16 00:48:28.824564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-16 00:48:28.824577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-16 00:48:28.824585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-16 00:48:28.824655 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-16 00:48:28.824662 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-16 00:48:28.824688 | orchestrator 
| changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-16 00:48:28.824702 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-16 00:48:28.824710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.824721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.824728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.824736 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.824742 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.824761 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.824778 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.824787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.824794 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.824801 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.824805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.824810 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.824814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.824819 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.824826 | orchestrator | 2026-03-16 00:48:28.824831 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-03-16 00:48:28.824838 | orchestrator | Monday 16 March 2026 00:46:25 +0000 (0:00:05.098) 0:00:12.534 ********** 2026-03-16 00:48:28.824844 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-16 00:48:28.824848 | 
orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:48:28.824853 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:48:28.824858 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:48:28.824865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-16 00:48:28.824872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:48:28.824878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:48:28.824885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-16 00:48:28.824899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:48:28.824905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:48:28.824910 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:48:28.824915 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:48:28.824921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-16 00:48:28.824933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:48:28.824944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:48:28.824950 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:48:28.824955 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-16 00:48:28.824962 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:48:28.824976 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:48:28.824983 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:48:28.824989 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-16 00:48:28.824995 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:48:28.825001 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:48:28.825006 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:48:28.825015 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-16 00:48:28.825021 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:48:28.825030 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:48:28.825036 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:48:28.825042 | 
orchestrator | 2026-03-16 00:48:28.825048 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-16 00:48:28.825054 | orchestrator | Monday 16 March 2026 00:46:26 +0000 (0:00:01.723) 0:00:14.257 ********** 2026-03-16 00:48:28.825060 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-16 00:48:28.825070 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:48:28.825077 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:48:28.825083 | 
orchestrator | skipping: [testbed-manager] 2026-03-16 00:48:28.825087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-16 00:48:28.825122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:48:28.825126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:48:28.825134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-16 00:48:28.825141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:48:28.825145 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:48:28.825518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:48:28.825546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-16 00:48:28.825553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:48:28.825560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:48:28.825570 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:48:28.825575 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:48:28.825579 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-16 00:48:28.825666 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:48:28.825676 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:48:28.825683 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-16 00:48:28.825698 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:48:28.825705 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:48:28.825712 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:48:28.825719 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:48:28.825726 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-16 00:48:28.825737 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:48:28.825749 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:48:28.825756 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:48:28.825762 | orchestrator | 2026-03-16 00:48:28.825769 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-03-16 00:48:28.825776 | orchestrator | Monday 16 March 2026 00:46:30 +0000 (0:00:03.547) 0:00:17.805 ********** 2026-03-16 00:48:28.825783 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:48:28.825790 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:48:28.825797 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:48:28.825804 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:48:28.825810 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:48:28.825817 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:48:28.825824 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:48:28.825831 | orchestrator | 2026-03-16 00:48:28.825838 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-03-16 00:48:28.825845 | orchestrator | Monday 16 March 2026 00:46:31 +0000 (0:00:01.492) 0:00:19.297 ********** 2026-03-16 00:48:28.825852 | orchestrator | skipping: [testbed-manager] 2026-03-16 00:48:28.825859 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:48:28.825866 | orchestrator | skipping: [testbed-node-1] 2026-03-16 
00:48:28.825873 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:48:28.825879 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:48:28.825886 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:48:28.825893 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:48:28.825900 | orchestrator | 2026-03-16 00:48:28.825907 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-03-16 00:48:28.825914 | orchestrator | Monday 16 March 2026 00:46:33 +0000 (0:00:01.634) 0:00:20.932 ********** 2026-03-16 00:48:28.825930 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-16 00:48:28.825938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-16 00:48:28.825945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-16 00:48:28.825956 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-16 00:48:28.825966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-16 00:48:28.825973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.825980 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-16 00:48:28.825991 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-16 00:48:28.826000 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.826010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.826078 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.826090 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.826097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.826105 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.826112 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.826123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.826131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.826142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.826149 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.826159 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.826166 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.826172 | orchestrator | 2026-03-16 00:48:28.826179 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-03-16 00:48:28.826186 | orchestrator | Monday 16 March 2026 00:46:40 +0000 (0:00:06.619) 0:00:27.551 ********** 2026-03-16 00:48:28.826193 | orchestrator | [WARNING]: Skipped 2026-03-16 00:48:28.826201 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-03-16 00:48:28.826208 | orchestrator | to this access issue: 2026-03-16 00:48:28.826215 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-03-16 00:48:28.826222 | orchestrator | directory 2026-03-16 00:48:28.826228 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-16 00:48:28.826235 | orchestrator | 2026-03-16 00:48:28.826241 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-03-16 00:48:28.826248 | orchestrator | Monday 16 March 2026 00:46:43 +0000 (0:00:03.086) 0:00:30.638 ********** 2026-03-16 00:48:28.826254 | orchestrator | [WARNING]: Skipped 2026-03-16 00:48:28.826261 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-03-16 00:48:28.826267 | orchestrator | to this access issue: 2026-03-16 00:48:28.826273 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-03-16 00:48:28.826280 
| orchestrator | directory 2026-03-16 00:48:28.826287 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-16 00:48:28.826293 | orchestrator | 2026-03-16 00:48:28.826300 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-03-16 00:48:28.826307 | orchestrator | Monday 16 March 2026 00:46:44 +0000 (0:00:00.836) 0:00:31.474 ********** 2026-03-16 00:48:28.826314 | orchestrator | [WARNING]: Skipped 2026-03-16 00:48:28.826320 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-03-16 00:48:28.826327 | orchestrator | to this access issue: 2026-03-16 00:48:28.826334 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-03-16 00:48:28.826341 | orchestrator | directory 2026-03-16 00:48:28.826353 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-16 00:48:28.826361 | orchestrator | 2026-03-16 00:48:28.826372 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-03-16 00:48:28.826380 | orchestrator | Monday 16 March 2026 00:46:44 +0000 (0:00:00.855) 0:00:32.329 ********** 2026-03-16 00:48:28.826387 | orchestrator | [WARNING]: Skipped 2026-03-16 00:48:28.826394 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-03-16 00:48:28.826401 | orchestrator | to this access issue: 2026-03-16 00:48:28.826408 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-03-16 00:48:28.826415 | orchestrator | directory 2026-03-16 00:48:28.826422 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-16 00:48:28.826429 | orchestrator | 2026-03-16 00:48:28.826436 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-03-16 00:48:28.826444 | orchestrator | Monday 16 March 2026 00:46:46 +0000 (0:00:01.555) 0:00:33.885 ********** 2026-03-16 
00:48:28.826451 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:48:28.826459 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:48:28.826464 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:48:28.826468 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:48:28.826473 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:48:28.826477 | orchestrator | changed: [testbed-manager] 2026-03-16 00:48:28.826481 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:48:28.826486 | orchestrator | 2026-03-16 00:48:28.826490 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-03-16 00:48:28.826494 | orchestrator | Monday 16 March 2026 00:46:50 +0000 (0:00:03.787) 0:00:37.673 ********** 2026-03-16 00:48:28.826499 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-16 00:48:28.826504 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-16 00:48:28.826508 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-16 00:48:28.826513 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-16 00:48:28.826517 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-16 00:48:28.826521 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-16 00:48:28.826526 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-16 00:48:28.826624 | orchestrator | 2026-03-16 00:48:28.826636 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-03-16 00:48:28.826640 | orchestrator | Monday 16 March 2026 
00:46:53 +0000 (0:00:03.704) 0:00:41.377 ********** 2026-03-16 00:48:28.826644 | orchestrator | changed: [testbed-manager] 2026-03-16 00:48:28.826648 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:48:28.826652 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:48:28.826655 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:48:28.826659 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:48:28.826663 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:48:28.826666 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:48:28.826670 | orchestrator | 2026-03-16 00:48:28.826674 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-03-16 00:48:28.826678 | orchestrator | Monday 16 March 2026 00:46:56 +0000 (0:00:03.011) 0:00:44.389 ********** 2026-03-16 00:48:28.826682 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-16 00:48:28.826691 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:48:28.826695 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-16 00:48:28.826708 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.826713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:48:28.826718 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-16 00:48:28.826724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:48:28.826728 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.826736 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-16 00:48:28.826740 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:48:28.826747 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-16 00:48:28.826751 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 
00:48:28.826755 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.826759 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-16 00:48:28.826763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:48:28.826773 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-16 00:48:28.826777 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.826785 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:48:28.826796 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.826803 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.826810 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.826816 | orchestrator | 2026-03-16 00:48:28.826822 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-03-16 00:48:28.826829 | orchestrator | Monday 16 March 2026 00:46:59 +0000 (0:00:02.833) 0:00:47.222 ********** 2026-03-16 00:48:28.826836 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-16 00:48:28.826842 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-16 00:48:28.826848 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-16 00:48:28.826854 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-16 00:48:28.826863 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-16 00:48:28.826871 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-16 00:48:28.826882 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 
2026-03-16 00:48:28.826888 | orchestrator | 2026-03-16 00:48:28.826895 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-03-16 00:48:28.826901 | orchestrator | Monday 16 March 2026 00:47:03 +0000 (0:00:03.846) 0:00:51.069 ********** 2026-03-16 00:48:28.826907 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-16 00:48:28.826912 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-16 00:48:28.826918 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-16 00:48:28.826924 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-16 00:48:28.826929 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-16 00:48:28.826935 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-16 00:48:28.826941 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-16 00:48:28.826947 | orchestrator | 2026-03-16 00:48:28.826953 | orchestrator | TASK [common : Check common containers] **************************************** 2026-03-16 00:48:28.826960 | orchestrator | Monday 16 March 2026 00:47:05 +0000 (0:00:02.129) 0:00:53.199 ********** 2026-03-16 00:48:28.826967 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-16 00:48:28.826974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-16 00:48:28.826985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-16 00:48:28.826992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-16 00:48:28.826998 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-16 00:48:28.827014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.827021 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.827027 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-16 00:48:28.827033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.827043 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-16 00:48:28.827050 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.827056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.827074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.827082 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.827089 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.827095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.827106 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.827112 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.827116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.827123 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.827130 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:48:28.827135 | orchestrator | 2026-03-16 00:48:28.827141 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-03-16 00:48:28.827147 | orchestrator | Monday 16 March 2026 00:47:09 +0000 (0:00:04.146) 0:00:57.346 ********** 2026-03-16 00:48:28.827153 | orchestrator | changed: [testbed-manager] 2026-03-16 00:48:28.827159 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:48:28.827166 | orchestrator | changed: [testbed-node-1] 2026-03-16 
00:48:28.827172 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:48:28.827178 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:48:28.827185 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:48:28.827191 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:48:28.827197 | orchestrator | 2026-03-16 00:48:28.827203 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-03-16 00:48:28.827210 | orchestrator | Monday 16 March 2026 00:47:11 +0000 (0:00:02.038) 0:00:59.385 ********** 2026-03-16 00:48:28.827216 | orchestrator | changed: [testbed-manager] 2026-03-16 00:48:28.827222 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:48:28.827229 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:48:28.827234 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:48:28.827238 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:48:28.827242 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:48:28.827245 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:48:28.827249 | orchestrator | 2026-03-16 00:48:28.827362 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-16 00:48:28.827369 | orchestrator | Monday 16 March 2026 00:47:13 +0000 (0:00:01.234) 0:01:00.619 ********** 2026-03-16 00:48:28.827376 | orchestrator | 2026-03-16 00:48:28.827382 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-16 00:48:28.827389 | orchestrator | Monday 16 March 2026 00:47:13 +0000 (0:00:00.077) 0:01:00.697 ********** 2026-03-16 00:48:28.827395 | orchestrator | 2026-03-16 00:48:28.827402 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-16 00:48:28.827408 | orchestrator | Monday 16 March 2026 00:47:13 +0000 (0:00:00.063) 0:01:00.761 ********** 2026-03-16 00:48:28.827414 | orchestrator | 2026-03-16 00:48:28.827421 | orchestrator | TASK 
[common : Flush handlers] ************************************************* 2026-03-16 00:48:28.827427 | orchestrator | Monday 16 March 2026 00:47:13 +0000 (0:00:00.217) 0:01:00.978 ********** 2026-03-16 00:48:28.827433 | orchestrator | 2026-03-16 00:48:28.827439 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-16 00:48:28.827446 | orchestrator | Monday 16 March 2026 00:47:13 +0000 (0:00:00.066) 0:01:01.045 ********** 2026-03-16 00:48:28.827452 | orchestrator | 2026-03-16 00:48:28.827459 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-16 00:48:28.827465 | orchestrator | Monday 16 March 2026 00:47:13 +0000 (0:00:00.061) 0:01:01.106 ********** 2026-03-16 00:48:28.827472 | orchestrator | 2026-03-16 00:48:28.827478 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-16 00:48:28.827491 | orchestrator | Monday 16 March 2026 00:47:13 +0000 (0:00:00.060) 0:01:01.167 ********** 2026-03-16 00:48:28.827497 | orchestrator | 2026-03-16 00:48:28.827503 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-03-16 00:48:28.827509 | orchestrator | Monday 16 March 2026 00:47:13 +0000 (0:00:00.081) 0:01:01.248 ********** 2026-03-16 00:48:28.827521 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:48:28.827529 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:48:28.827536 | orchestrator | changed: [testbed-manager] 2026-03-16 00:48:28.827542 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:48:28.827548 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:48:28.827554 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:48:28.827560 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:48:28.827567 | orchestrator | 2026-03-16 00:48:28.827573 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 
2026-03-16 00:48:28.827580 | orchestrator | Monday 16 March 2026 00:47:44 +0000 (0:00:30.501) 0:01:31.750 ********** 2026-03-16 00:48:28.827612 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:48:28.827620 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:48:28.827626 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:48:28.827632 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:48:28.827639 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:48:28.827645 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:48:28.827652 | orchestrator | changed: [testbed-manager] 2026-03-16 00:48:28.827658 | orchestrator | 2026-03-16 00:48:28.827664 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-03-16 00:48:28.827671 | orchestrator | Monday 16 March 2026 00:48:19 +0000 (0:00:34.885) 0:02:06.636 ********** 2026-03-16 00:48:28.827677 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:48:28.827684 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:48:28.827690 | orchestrator | ok: [testbed-manager] 2026-03-16 00:48:28.827696 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:48:28.827703 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:48:28.827709 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:48:28.827716 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:48:28.827722 | orchestrator | 2026-03-16 00:48:28.827729 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-03-16 00:48:28.827735 | orchestrator | Monday 16 March 2026 00:48:21 +0000 (0:00:02.404) 0:02:09.041 ********** 2026-03-16 00:48:28.827742 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:48:28.827748 | orchestrator | changed: [testbed-manager] 2026-03-16 00:48:28.827754 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:48:28.827761 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:48:28.827767 | orchestrator | changed: [testbed-node-3] 2026-03-16 
00:48:28.827774 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:48:28.827780 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:48:28.827786 | orchestrator | 2026-03-16 00:48:28.827792 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-16 00:48:28.827799 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-16 00:48:28.827812 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-16 00:48:28.827819 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-16 00:48:28.827825 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-16 00:48:28.827832 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-16 00:48:28.827838 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-16 00:48:28.827849 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-16 00:48:28.827856 | orchestrator | 2026-03-16 00:48:28.827862 | orchestrator | 2026-03-16 00:48:28.827869 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-16 00:48:28.827875 | orchestrator | Monday 16 March 2026 00:48:27 +0000 (0:00:05.710) 0:02:14.752 ********** 2026-03-16 00:48:28.827882 | orchestrator | =============================================================================== 2026-03-16 00:48:28.827888 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 34.89s 2026-03-16 00:48:28.827894 | orchestrator | common : Restart fluentd container ------------------------------------- 30.50s 2026-03-16 00:48:28.827901 | orchestrator | common : Copying 
over config.json files for services -------------------- 6.62s 2026-03-16 00:48:28.827907 | orchestrator | common : Restart cron container ----------------------------------------- 5.71s 2026-03-16 00:48:28.827913 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.10s 2026-03-16 00:48:28.827920 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.70s 2026-03-16 00:48:28.827926 | orchestrator | common : Check common containers ---------------------------------------- 4.15s 2026-03-16 00:48:28.827933 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.85s 2026-03-16 00:48:28.827939 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.79s 2026-03-16 00:48:28.827946 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.70s 2026-03-16 00:48:28.827956 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.55s 2026-03-16 00:48:28.827963 | orchestrator | common : Find custom fluentd input config files ------------------------- 3.09s 2026-03-16 00:48:28.827969 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.01s 2026-03-16 00:48:28.827976 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.83s 2026-03-16 00:48:28.827986 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.40s 2026-03-16 00:48:28.827993 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.13s 2026-03-16 00:48:28.828000 | orchestrator | common : Creating log volume -------------------------------------------- 2.04s 2026-03-16 00:48:28.828007 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.72s 2026-03-16 00:48:28.828013 | orchestrator | common : Restart 
systemd-tmpfiles --------------------------------------- 1.63s 2026-03-16 00:48:28.828018 | orchestrator | common : Find custom fluentd output config files ------------------------ 1.56s 2026-03-16 00:48:28.828025 | orchestrator | 2026-03-16 00:48:28 | INFO  | Task c499bed2-2730-4cb8-b56b-58820929486a is in state STARTED 2026-03-16 00:48:28.828031 | orchestrator | 2026-03-16 00:48:28 | INFO  | Task bc8a6421-cc97-49cf-8d7b-f792e9059bee is in state STARTED 2026-03-16 00:48:28.828037 | orchestrator | 2026-03-16 00:48:28 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:48:28.828043 | orchestrator | 2026-03-16 00:48:28 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:48:28.828049 | orchestrator | 2026-03-16 00:48:28 | INFO  | Task 3fdc93e1-6c1a-4e20-8357-48b2fa1aca99 is in state STARTED 2026-03-16 00:48:28.828056 | orchestrator | 2026-03-16 00:48:28 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:48:31.850924 | orchestrator | 2026-03-16 00:48:31 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:48:31.851383 | orchestrator | 2026-03-16 00:48:31 | INFO  | Task c499bed2-2730-4cb8-b56b-58820929486a is in state STARTED 2026-03-16 00:48:31.851910 | orchestrator | 2026-03-16 00:48:31 | INFO  | Task bc8a6421-cc97-49cf-8d7b-f792e9059bee is in state STARTED 2026-03-16 00:48:31.852950 | orchestrator | 2026-03-16 00:48:31 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:48:31.855905 | orchestrator | 2026-03-16 00:48:31 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:48:31.856450 | orchestrator | 2026-03-16 00:48:31 | INFO  | Task 3fdc93e1-6c1a-4e20-8357-48b2fa1aca99 is in state STARTED 2026-03-16 00:48:31.856476 | orchestrator | 2026-03-16 00:48:31 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:48:34.875902 | orchestrator | 2026-03-16 00:48:34 | INFO  | Task 
fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:48:34.876038 | orchestrator | 2026-03-16 00:48:34 | INFO  | Task c499bed2-2730-4cb8-b56b-58820929486a is in state STARTED 2026-03-16 00:48:34.876563 | orchestrator | 2026-03-16 00:48:34 | INFO  | Task bc8a6421-cc97-49cf-8d7b-f792e9059bee is in state STARTED 2026-03-16 00:48:34.877342 | orchestrator | 2026-03-16 00:48:34 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:48:34.877751 | orchestrator | 2026-03-16 00:48:34 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:48:34.878501 | orchestrator | 2026-03-16 00:48:34 | INFO  | Task 3fdc93e1-6c1a-4e20-8357-48b2fa1aca99 is in state STARTED 2026-03-16 00:48:34.878551 | orchestrator | 2026-03-16 00:48:34 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:48:37.905144 | orchestrator | 2026-03-16 00:48:37 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:48:37.906113 | orchestrator | 2026-03-16 00:48:37 | INFO  | Task c499bed2-2730-4cb8-b56b-58820929486a is in state STARTED 2026-03-16 00:48:37.906995 | orchestrator | 2026-03-16 00:48:37 | INFO  | Task bc8a6421-cc97-49cf-8d7b-f792e9059bee is in state STARTED 2026-03-16 00:48:37.908091 | orchestrator | 2026-03-16 00:48:37 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:48:37.911509 | orchestrator | 2026-03-16 00:48:37 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:48:37.911556 | orchestrator | 2026-03-16 00:48:37 | INFO  | Task 3fdc93e1-6c1a-4e20-8357-48b2fa1aca99 is in state STARTED 2026-03-16 00:48:37.911565 | orchestrator | 2026-03-16 00:48:37 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:48:40.957554 | orchestrator | 2026-03-16 00:48:40 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:48:40.958855 | orchestrator | 2026-03-16 00:48:40 | INFO  | Task 
c499bed2-2730-4cb8-b56b-58820929486a is in state STARTED 2026-03-16 00:48:40.959939 | orchestrator | 2026-03-16 00:48:40 | INFO  | Task bc8a6421-cc97-49cf-8d7b-f792e9059bee is in state STARTED 2026-03-16 00:48:40.960176 | orchestrator | 2026-03-16 00:48:40 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:48:40.961231 | orchestrator | 2026-03-16 00:48:40 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:48:40.962492 | orchestrator | 2026-03-16 00:48:40 | INFO  | Task 3fdc93e1-6c1a-4e20-8357-48b2fa1aca99 is in state STARTED 2026-03-16 00:48:40.962516 | orchestrator | 2026-03-16 00:48:40 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:48:43.989030 | orchestrator | 2026-03-16 00:48:43 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:48:43.989236 | orchestrator | 2026-03-16 00:48:43 | INFO  | Task c499bed2-2730-4cb8-b56b-58820929486a is in state STARTED 2026-03-16 00:48:43.989807 | orchestrator | 2026-03-16 00:48:43 | INFO  | Task bc8a6421-cc97-49cf-8d7b-f792e9059bee is in state SUCCESS 2026-03-16 00:48:43.990980 | orchestrator | 2026-03-16 00:48:43 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:48:43.991828 | orchestrator | 2026-03-16 00:48:43 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:48:43.992976 | orchestrator | 2026-03-16 00:48:43 | INFO  | Task 3fdc93e1-6c1a-4e20-8357-48b2fa1aca99 is in state STARTED 2026-03-16 00:48:43.993470 | orchestrator | 2026-03-16 00:48:43 | INFO  | Task 286dcda1-9059-41c5-afad-f27ec04ba258 is in state STARTED 2026-03-16 00:48:43.993498 | orchestrator | 2026-03-16 00:48:43 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:48:47.032078 | orchestrator | 2026-03-16 00:48:47 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:48:47.038740 | orchestrator | 2026-03-16 00:48:47 | INFO  | Task 
c499bed2-2730-4cb8-b56b-58820929486a is in state STARTED 2026-03-16 00:48:47.038810 | orchestrator | 2026-03-16 00:48:47 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:48:47.038818 | orchestrator | 2026-03-16 00:48:47 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:48:47.038846 | orchestrator | 2026-03-16 00:48:47 | INFO  | Task 3fdc93e1-6c1a-4e20-8357-48b2fa1aca99 is in state STARTED 2026-03-16 00:48:47.038853 | orchestrator | 2026-03-16 00:48:47 | INFO  | Task 286dcda1-9059-41c5-afad-f27ec04ba258 is in state STARTED 2026-03-16 00:48:47.038861 | orchestrator | 2026-03-16 00:48:47 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:48:50.093400 | orchestrator | 2026-03-16 00:48:50 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:48:50.096211 | orchestrator | 2026-03-16 00:48:50 | INFO  | Task c499bed2-2730-4cb8-b56b-58820929486a is in state STARTED 2026-03-16 00:48:50.096748 | orchestrator | 2026-03-16 00:48:50 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:48:50.097176 | orchestrator | 2026-03-16 00:48:50 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:48:50.097913 | orchestrator | 2026-03-16 00:48:50 | INFO  | Task 3fdc93e1-6c1a-4e20-8357-48b2fa1aca99 is in state STARTED 2026-03-16 00:48:50.098548 | orchestrator | 2026-03-16 00:48:50 | INFO  | Task 286dcda1-9059-41c5-afad-f27ec04ba258 is in state STARTED 2026-03-16 00:48:50.098610 | orchestrator | 2026-03-16 00:48:50 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:48:53.166918 | orchestrator | 2026-03-16 00:48:53 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:48:53.167205 | orchestrator | 2026-03-16 00:48:53 | INFO  | Task c499bed2-2730-4cb8-b56b-58820929486a is in state STARTED 2026-03-16 00:48:53.167847 | orchestrator | 2026-03-16 00:48:53 | INFO  | Task 
ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:48:53.172029 | orchestrator | 2026-03-16 00:48:53 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:48:53.172393 | orchestrator | 2026-03-16 00:48:53 | INFO  | Task 3fdc93e1-6c1a-4e20-8357-48b2fa1aca99 is in state STARTED 2026-03-16 00:48:53.175310 | orchestrator | 2026-03-16 00:48:53 | INFO  | Task 286dcda1-9059-41c5-afad-f27ec04ba258 is in state STARTED 2026-03-16 00:48:53.175398 | orchestrator | 2026-03-16 00:48:53 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:48:56.203616 | orchestrator | 2026-03-16 00:48:56 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:48:56.204128 | orchestrator | 2026-03-16 00:48:56 | INFO  | Task c499bed2-2730-4cb8-b56b-58820929486a is in state STARTED 2026-03-16 00:48:56.205740 | orchestrator | 2026-03-16 00:48:56 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:48:56.206892 | orchestrator | 2026-03-16 00:48:56 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:48:56.207811 | orchestrator | 2026-03-16 00:48:56 | INFO  | Task 3fdc93e1-6c1a-4e20-8357-48b2fa1aca99 is in state STARTED 2026-03-16 00:48:56.208904 | orchestrator | 2026-03-16 00:48:56 | INFO  | Task 286dcda1-9059-41c5-afad-f27ec04ba258 is in state STARTED 2026-03-16 00:48:56.209242 | orchestrator | 2026-03-16 00:48:56 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:48:59.246749 | orchestrator | 2026-03-16 00:48:59 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:48:59.254166 | orchestrator | 2026-03-16 00:48:59 | INFO  | Task c499bed2-2730-4cb8-b56b-58820929486a is in state STARTED 2026-03-16 00:48:59.258064 | orchestrator | 2026-03-16 00:48:59 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:48:59.261179 | orchestrator | 2026-03-16 00:48:59 | INFO  | Task 
a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:48:59.264305 | orchestrator | 2026-03-16 00:48:59 | INFO  | Task 3fdc93e1-6c1a-4e20-8357-48b2fa1aca99 is in state STARTED 2026-03-16 00:48:59.270098 | orchestrator | 2026-03-16 00:48:59 | INFO  | Task 286dcda1-9059-41c5-afad-f27ec04ba258 is in state STARTED 2026-03-16 00:48:59.270167 | orchestrator | 2026-03-16 00:48:59 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:49:02.305295 | orchestrator | 2026-03-16 00:49:02 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:49:02.305702 | orchestrator | 2026-03-16 00:49:02 | INFO  | Task c499bed2-2730-4cb8-b56b-58820929486a is in state STARTED 2026-03-16 00:49:02.306970 | orchestrator | 2026-03-16 00:49:02 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:49:02.307754 | orchestrator | 2026-03-16 00:49:02 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:49:02.308310 | orchestrator | 2026-03-16 00:49:02 | INFO  | Task 3fdc93e1-6c1a-4e20-8357-48b2fa1aca99 is in state STARTED 2026-03-16 00:49:02.309767 | orchestrator | 2026-03-16 00:49:02 | INFO  | Task 286dcda1-9059-41c5-afad-f27ec04ba258 is in state STARTED 2026-03-16 00:49:02.309795 | orchestrator | 2026-03-16 00:49:02 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:49:05.347802 | orchestrator | 2026-03-16 00:49:05 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:49:05.349852 | orchestrator | 2026-03-16 00:49:05 | INFO  | Task c499bed2-2730-4cb8-b56b-58820929486a is in state SUCCESS 2026-03-16 00:49:05.350200 | orchestrator | 2026-03-16 00:49:05.350216 | orchestrator | 2026-03-16 00:49:05.350221 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-16 00:49:05.350226 | orchestrator | 2026-03-16 00:49:05.350230 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2026-03-16 00:49:05.350235 | orchestrator | Monday 16 March 2026 00:48:31 +0000 (0:00:00.241) 0:00:00.241 ********** 2026-03-16 00:49:05.350240 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:49:05.350246 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:49:05.350250 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:49:05.350254 | orchestrator | 2026-03-16 00:49:05.350258 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-16 00:49:05.350263 | orchestrator | Monday 16 March 2026 00:48:32 +0000 (0:00:00.394) 0:00:00.635 ********** 2026-03-16 00:49:05.350268 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-03-16 00:49:05.350298 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-03-16 00:49:05.350302 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-03-16 00:49:05.350306 | orchestrator | 2026-03-16 00:49:05.350310 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-03-16 00:49:05.350314 | orchestrator | 2026-03-16 00:49:05.350318 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-03-16 00:49:05.350322 | orchestrator | Monday 16 March 2026 00:48:32 +0000 (0:00:00.488) 0:00:01.124 ********** 2026-03-16 00:49:05.350327 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:49:05.350332 | orchestrator | 2026-03-16 00:49:05.350336 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-03-16 00:49:05.350340 | orchestrator | Monday 16 March 2026 00:48:33 +0000 (0:00:00.591) 0:00:01.716 ********** 2026-03-16 00:49:05.350345 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-16 00:49:05.350349 | orchestrator | changed: [testbed-node-0] => (item=memcached) 
2026-03-16 00:49:05.350353 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-16 00:49:05.350357 | orchestrator | 2026-03-16 00:49:05.350361 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-03-16 00:49:05.350365 | orchestrator | Monday 16 March 2026 00:48:34 +0000 (0:00:00.747) 0:00:02.464 ********** 2026-03-16 00:49:05.350368 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-16 00:49:05.350372 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-16 00:49:05.350376 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-16 00:49:05.350380 | orchestrator | 2026-03-16 00:49:05.350384 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2026-03-16 00:49:05.350388 | orchestrator | Monday 16 March 2026 00:48:36 +0000 (0:00:02.230) 0:00:04.695 ********** 2026-03-16 00:49:05.350392 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:49:05.350396 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:49:05.350400 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:49:05.350404 | orchestrator | 2026-03-16 00:49:05.350408 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-03-16 00:49:05.350412 | orchestrator | Monday 16 March 2026 00:48:38 +0000 (0:00:01.703) 0:00:06.398 ********** 2026-03-16 00:49:05.350416 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:49:05.350419 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:49:05.350423 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:49:05.350427 | orchestrator | 2026-03-16 00:49:05.350431 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-16 00:49:05.350435 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-16 00:49:05.350441 | orchestrator | testbed-node-1 : 
ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-16 00:49:05.350445 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-16 00:49:05.350449 | orchestrator | 2026-03-16 00:49:05.350452 | orchestrator | 2026-03-16 00:49:05.350456 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-16 00:49:05.350460 | orchestrator | Monday 16 March 2026 00:48:41 +0000 (0:00:02.917) 0:00:09.316 ********** 2026-03-16 00:49:05.350464 | orchestrator | =============================================================================== 2026-03-16 00:49:05.350468 | orchestrator | memcached : Restart memcached container --------------------------------- 2.92s 2026-03-16 00:49:05.350472 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.23s 2026-03-16 00:49:05.350475 | orchestrator | memcached : Check memcached container ----------------------------------- 1.71s 2026-03-16 00:49:05.350479 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.75s 2026-03-16 00:49:05.350492 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.59s 2026-03-16 00:49:05.350496 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.49s 2026-03-16 00:49:05.350500 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.39s 2026-03-16 00:49:05.350503 | orchestrator | 2026-03-16 00:49:05.351643 | orchestrator | 2026-03-16 00:49:05.351706 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-16 00:49:05.351715 | orchestrator | 2026-03-16 00:49:05.351720 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-16 00:49:05.351725 | orchestrator | Monday 16 March 2026 00:48:32 +0000 (0:00:00.234) 
0:00:00.234 ********** 2026-03-16 00:49:05.351730 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:49:05.351737 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:49:05.351741 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:49:05.351746 | orchestrator | 2026-03-16 00:49:05.351750 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-16 00:49:05.351755 | orchestrator | Monday 16 March 2026 00:48:32 +0000 (0:00:00.393) 0:00:00.628 ********** 2026-03-16 00:49:05.351759 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-03-16 00:49:05.351764 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-03-16 00:49:05.351769 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-03-16 00:49:05.351773 | orchestrator | 2026-03-16 00:49:05.351777 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-03-16 00:49:05.351781 | orchestrator | 2026-03-16 00:49:05.351785 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-03-16 00:49:05.351790 | orchestrator | Monday 16 March 2026 00:48:33 +0000 (0:00:00.511) 0:00:01.139 ********** 2026-03-16 00:49:05.351803 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:49:05.351809 | orchestrator | 2026-03-16 00:49:05.351813 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-03-16 00:49:05.351818 | orchestrator | Monday 16 March 2026 00:48:33 +0000 (0:00:00.598) 0:00:01.738 ********** 2026-03-16 00:49:05.351834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-16 00:49:05.351869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-16 00:49:05.351878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-16 00:49:05.351887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-16 00:49:05.351931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-16 00:49:05.351937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-16 00:49:05.351941 | orchestrator | 2026-03-16 00:49:05.351946 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-03-16 00:49:05.351950 | orchestrator | Monday 16 March 2026 00:48:35 +0000 (0:00:01.402) 0:00:03.141 
********** 2026-03-16 00:49:05.351954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-16 00:49:05.351959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-16 00:49:05.351963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-16 00:49:05.351973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': 
{'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-16 00:49:05.351985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-16 00:49:05.351990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-16 00:49:05.351994 | orchestrator | 2026-03-16 00:49:05.352011 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-03-16 00:49:05.352016 | orchestrator | Monday 16 March 2026 00:48:37 +0000 (0:00:02.924) 0:00:06.065 ********** 2026-03-16 00:49:05.352020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-16 00:49:05.352030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-16 00:49:05.352035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-16 00:49:05.352045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-16 00:49:05.352052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-16 00:49:05.352059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-16 00:49:05.352064 | orchestrator | 2026-03-16 00:49:05.352068 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-03-16 00:49:05.352072 | orchestrator | Monday 16 March 2026 00:48:40 +0000 (0:00:02.919) 0:00:08.985 ********** 2026-03-16 00:49:05.352077 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-16 00:49:05.352081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-16 00:49:05.352086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 
'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-16 00:49:05.352098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-16 00:49:05.352106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-16 
00:49:05.352121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-16 00:49:05.352130 | orchestrator | 2026-03-16 00:49:05.352134 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-16 00:49:05.352139 | orchestrator | Monday 16 March 2026 00:48:43 +0000 (0:00:02.148) 0:00:11.133 ********** 2026-03-16 00:49:05.352143 | orchestrator | 2026-03-16 00:49:05.352147 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-16 00:49:05.352152 | orchestrator | Monday 16 March 2026 00:48:43 +0000 (0:00:00.127) 0:00:11.260 ********** 2026-03-16 00:49:05.352156 | orchestrator | 2026-03-16 00:49:05.352160 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-16 00:49:05.352164 | orchestrator | Monday 16 March 2026 00:48:43 +0000 (0:00:00.110) 0:00:11.370 ********** 2026-03-16 00:49:05.352168 | orchestrator | 2026-03-16 00:49:05.352172 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-03-16 00:49:05.352177 | orchestrator | Monday 16 March 2026 00:48:43 +0000 (0:00:00.067) 0:00:11.438 ********** 2026-03-16 00:49:05.352182 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:49:05.352189 | orchestrator | changed: 
[testbed-node-2] 2026-03-16 00:49:05.352196 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:49:05.352203 | orchestrator | 2026-03-16 00:49:05.352209 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-03-16 00:49:05.352216 | orchestrator | Monday 16 March 2026 00:48:51 +0000 (0:00:08.379) 0:00:19.818 ********** 2026-03-16 00:49:05.352229 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:49:05.352235 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:49:05.352241 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:49:05.352248 | orchestrator | 2026-03-16 00:49:05.352255 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-16 00:49:05.352263 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-16 00:49:05.352271 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-16 00:49:05.352278 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-16 00:49:05.352284 | orchestrator | 2026-03-16 00:49:05.352291 | orchestrator | 2026-03-16 00:49:05.352298 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-16 00:49:05.352305 | orchestrator | Monday 16 March 2026 00:49:01 +0000 (0:00:09.600) 0:00:29.419 ********** 2026-03-16 00:49:05.352312 | orchestrator | =============================================================================== 2026-03-16 00:49:05.352319 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 9.60s 2026-03-16 00:49:05.352327 | orchestrator | redis : Restart redis container ----------------------------------------- 8.38s 2026-03-16 00:49:05.352333 | orchestrator | redis : Copying over default config.json files -------------------------- 2.92s 2026-03-16 00:49:05.352338 | 
orchestrator | redis : Copying over redis config files --------------------------------- 2.92s 2026-03-16 00:49:05.352343 | orchestrator | redis : Check redis containers ------------------------------------------ 2.15s 2026-03-16 00:49:05.352347 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.40s 2026-03-16 00:49:05.352352 | orchestrator | redis : include_tasks --------------------------------------------------- 0.60s 2026-03-16 00:49:05.352358 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.51s 2026-03-16 00:49:05.352363 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.39s 2026-03-16 00:49:05.352368 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.31s 2026-03-16 00:49:05.353050 | orchestrator | 2026-03-16 00:49:05 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:49:05.355126 | orchestrator | 2026-03-16 00:49:05 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:49:05.356574 | orchestrator | 2026-03-16 00:49:05 | INFO  | Task 3fdc93e1-6c1a-4e20-8357-48b2fa1aca99 is in state STARTED 2026-03-16 00:49:05.358589 | orchestrator | 2026-03-16 00:49:05 | INFO  | Task 286dcda1-9059-41c5-afad-f27ec04ba258 is in state STARTED 2026-03-16 00:49:05.358848 | orchestrator | 2026-03-16 00:49:05 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:49:08.442156 | orchestrator | 2026-03-16 00:49:08 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:49:08.443106 | orchestrator | 2026-03-16 00:49:08 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:49:08.444636 | orchestrator | 2026-03-16 00:49:08 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:49:08.445176 | orchestrator | 2026-03-16 00:49:08 | INFO  | Task 
fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:49:33.071674 | orchestrator | 2026-03-16 00:49:33 | INFO  | Task c96e2f8a-52ca-433e-ac09-879563aff87f is in state STARTED 2026-03-16 00:49:33.073044 | orchestrator | 2026-03-16 00:49:33 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:49:33.074980 | orchestrator | 2026-03-16 00:49:33 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:49:33.079441 | orchestrator | 2026-03-16 00:49:33.079543 | orchestrator | 2026-03-16 00:49:33.079553 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-16 00:49:33.079558 | orchestrator | 2026-03-16 00:49:33.079562 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-16 00:49:33.079567 | orchestrator | Monday 16 March 2026 00:48:32 +0000 (0:00:00.502) 0:00:00.502 ********** 2026-03-16 00:49:33.079571 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:49:33.079576 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:49:33.079581 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:49:33.079585 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:49:33.079589 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:49:33.079593 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:49:33.079597 | orchestrator | 2026-03-16 00:49:33.079601 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-16 00:49:33.079605 | orchestrator | Monday 16 March 2026 00:48:33 +0000 (0:00:00.872) 0:00:01.375 ********** 2026-03-16 00:49:33.079608 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-16 00:49:33.079613 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-16 00:49:33.079616 | orchestrator | ok: [testbed-node-2] => 
(item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-16 00:49:33.079620 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-16 00:49:33.079624 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-16 00:49:33.079629 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-16 00:49:33.079662 | orchestrator | 2026-03-16 00:49:33.079670 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-03-16 00:49:33.079676 | orchestrator | 2026-03-16 00:49:33.079683 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-03-16 00:49:33.079689 | orchestrator | Monday 16 March 2026 00:48:34 +0000 (0:00:00.834) 0:00:02.209 ********** 2026-03-16 00:49:33.079697 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-16 00:49:33.079704 | orchestrator | 2026-03-16 00:49:33.079708 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-16 00:49:33.079712 | orchestrator | Monday 16 March 2026 00:48:35 +0000 (0:00:01.414) 0:00:03.624 ********** 2026-03-16 00:49:33.079716 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-16 00:49:33.079720 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-16 00:49:33.079724 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-16 00:49:33.079728 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-16 00:49:33.079732 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-16 00:49:33.079735 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-16 00:49:33.079739 | orchestrator | 2026-03-16 00:49:33.079743 | orchestrator | TASK 
[module-load : Persist modules via modules-load.d] ************************ 2026-03-16 00:49:33.079746 | orchestrator | Monday 16 March 2026 00:48:36 +0000 (0:00:01.328) 0:00:04.952 ********** 2026-03-16 00:49:33.079750 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-16 00:49:33.079754 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-16 00:49:33.079757 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-16 00:49:33.079773 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-16 00:49:33.079777 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-16 00:49:33.079780 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-16 00:49:33.079784 | orchestrator | 2026-03-16 00:49:33.079788 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-16 00:49:33.079791 | orchestrator | Monday 16 March 2026 00:48:38 +0000 (0:00:01.432) 0:00:06.384 ********** 2026-03-16 00:49:33.079795 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-03-16 00:49:33.079799 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:49:33.079806 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-03-16 00:49:33.079815 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:49:33.079822 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-03-16 00:49:33.079827 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:49:33.079833 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-03-16 00:49:33.079839 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:49:33.079845 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-03-16 00:49:33.079850 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:49:33.079856 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-03-16 00:49:33.079862 | orchestrator | 
skipping: [testbed-node-5] 2026-03-16 00:49:33.079867 | orchestrator | 2026-03-16 00:49:33.079873 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-03-16 00:49:33.079878 | orchestrator | Monday 16 March 2026 00:48:39 +0000 (0:00:01.311) 0:00:07.696 ********** 2026-03-16 00:49:33.079883 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:49:33.079889 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:49:33.079895 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:49:33.079900 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:49:33.079906 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:49:33.079912 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:49:33.079918 | orchestrator | 2026-03-16 00:49:33.079930 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-03-16 00:49:33.079937 | orchestrator | Monday 16 March 2026 00:48:40 +0000 (0:00:00.782) 0:00:08.478 ********** 2026-03-16 00:49:33.079961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-16 00:49:33.079972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-16 00:49:33.079979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-16 00:49:33.079992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-16 00:49:33.080001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-16 00:49:33.080006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-16 00:49:33.080023 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-16 00:49:33.080030 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-16 00:49:33.080035 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-16 00:49:33.080045 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-16 00:49:33.080051 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-16 00:49:33.080067 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-16 00:49:33.080074 | orchestrator |
2026-03-16 00:49:33.080080 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-03-16 00:49:33.080087 | orchestrator | Monday 16 March 2026 00:48:42 +0000 (0:00:02.063) 0:00:10.542 **********
2026-03-16 00:49:33.080094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-16 00:49:33.080101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-16 00:49:33.080108 | orchestrator | changed:
[testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-16 00:49:33.080115 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-16 00:49:33.080127 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-16 00:49:33.080139 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-16 00:49:33.080151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-16 00:49:33.080160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-16 00:49:33.080170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-16 00:49:33.080178 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test':
['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-16 00:49:33.080195 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-16 00:49:33.080203 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-16 00:49:33.080210 | orchestrator |
2026-03-16 00:49:33.080217 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-03-16 00:49:33.080223 | orchestrator | Monday 16 March 2026 00:48:45 +0000 (0:00:02.830) 0:00:13.373 **********
2026-03-16 00:49:33.080230 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:49:33.080237 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:49:33.080245 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:49:33.080252 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:49:33.080258 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:49:33.080265 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:49:33.080270 | orchestrator |
2026-03-16 00:49:33.080276 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2026-03-16 00:49:33.080282 | orchestrator | Monday 16 March 2026 00:48:46 +0000 (0:00:01.235) 0:00:14.609 **********
2026-03-16 00:49:33.080288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-16 00:49:33.080300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5',
'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-16 00:49:33.080314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-16 00:49:33.080325 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-16 00:49:33.080332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-16 00:49:33.080339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-16 00:49:33.080346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-16 00:49:33.080357 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image':
'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-16 00:49:33.080378 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-16 00:49:33.080391 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-16 00:49:33.080398 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-16 00:49:33.080406 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-16 00:49:33.080413 | orchestrator |
2026-03-16 00:49:33.080420 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-16 00:49:33.080426 | orchestrator | Monday 16 March 2026 00:48:49 +0000 (0:00:00.366) 0:00:17.477 **********
2026-03-16 00:49:33.080432 | orchestrator |
2026-03-16 00:49:33.080438 | orchestrator | TASK [openvswitch : Flush Handlers]
********************************************
2026-03-16 00:49:33.080448 | orchestrator | Monday 16 March 2026 00:48:49 +0000 (0:00:00.366) 0:00:17.843 **********
2026-03-16 00:49:33.080455 | orchestrator |
2026-03-16 00:49:33.080461 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-16 00:49:33.080467 | orchestrator | Monday 16 March 2026 00:48:50 +0000 (0:00:00.129) 0:00:17.973 **********
2026-03-16 00:49:33.080473 | orchestrator |
2026-03-16 00:49:33.080499 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-16 00:49:33.080506 | orchestrator | Monday 16 March 2026 00:48:50 +0000 (0:00:00.150) 0:00:18.123 **********
2026-03-16 00:49:33.080512 | orchestrator |
2026-03-16 00:49:33.080521 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-16 00:49:33.080527 | orchestrator | Monday 16 March 2026 00:48:50 +0000 (0:00:00.124) 0:00:18.248 **********
2026-03-16 00:49:33.080533 | orchestrator |
2026-03-16 00:49:33.080539 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-16 00:49:33.080545 | orchestrator | Monday 16 March 2026 00:48:50 +0000 (0:00:00.113) 0:00:18.361 **********
2026-03-16 00:49:33.080552 | orchestrator |
2026-03-16 00:49:33.080558 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-03-16 00:49:33.080563 | orchestrator | Monday 16 March 2026 00:48:50 +0000 (0:00:00.139) 0:00:18.500 **********
2026-03-16 00:49:33.080570 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:49:33.080576 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:49:33.080582 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:49:33.080588 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:49:33.080593 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:49:33.080599 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:49:33.080604 | orchestrator |
2026-03-16 00:49:33.080609 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-03-16 00:49:33.080615 | orchestrator | Monday 16 March 2026 00:49:00 +0000 (0:00:09.642) 0:00:28.143 **********
2026-03-16 00:49:33.080621 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:49:33.080628 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:49:33.080634 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:49:33.080640 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:49:33.080646 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:49:33.080652 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:49:33.080658 | orchestrator |
2026-03-16 00:49:33.080664 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-03-16 00:49:33.080670 | orchestrator | Monday 16 March 2026 00:49:01 +0000 (0:00:01.255) 0:00:29.399 **********
2026-03-16 00:49:33.080677 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:49:33.080683 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:49:33.080689 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:49:33.080695 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:49:33.080701 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:49:33.080707 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:49:33.080712 | orchestrator |
2026-03-16 00:49:33.080718 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-03-16 00:49:33.080724 | orchestrator | Monday 16 March 2026 00:49:06 +0000 (0:00:05.448) 0:00:34.847 **********
2026-03-16 00:49:33.080736 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-03-16 00:49:33.080743 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-03-16 00:49:33.080749 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-03-16 00:49:33.080754 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-03-16 00:49:33.080761 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-03-16 00:49:33.080766 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-03-16 00:49:33.080779 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-03-16 00:49:33.080785 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-03-16 00:49:33.080792 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-03-16 00:49:33.080799 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-03-16 00:49:33.080805 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-03-16 00:49:33.080811 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-03-16 00:49:33.080818 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-16 00:49:33.080825 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-16 00:49:33.080831 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-16 00:49:33.080838 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-16 00:49:33.080845 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-16 00:49:33.080851 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-16 00:49:33.080858 | orchestrator |
2026-03-16 00:49:33.080865 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-03-16 00:49:33.080871 | orchestrator | Monday 16 March 2026 00:49:14 +0000 (0:00:08.067) 0:00:42.915 **********
2026-03-16 00:49:33.080878 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-03-16 00:49:33.080885 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:49:33.080891 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-03-16 00:49:33.080901 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:49:33.080908 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-03-16 00:49:33.080913 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:49:33.080920 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2026-03-16 00:49:33.080926 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2026-03-16 00:49:33.080932 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2026-03-16 00:49:33.080939 | orchestrator |
2026-03-16 00:49:33.080945 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-03-16 00:49:33.080951 | orchestrator | Monday 16 March 2026 00:49:17 +0000 (0:00:02.774) 0:00:45.689 **********
2026-03-16 00:49:33.080957 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-03-16 00:49:33.080963 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:49:33.080969 | orchestrator | skipping:
[testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-03-16 00:49:33.080975 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:49:33.080980 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-03-16 00:49:33.080986 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:49:33.080993 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-03-16 00:49:33.080999 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-03-16 00:49:33.081005 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-03-16 00:49:33.081011 | orchestrator |
2026-03-16 00:49:33.081016 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-03-16 00:49:33.081022 | orchestrator | Monday 16 March 2026 00:49:21 +0000 (0:00:04.072) 0:00:49.762 **********
2026-03-16 00:49:33.081028 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:49:33.081039 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:49:33.081045 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:49:33.081051 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:49:33.081058 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:49:33.081064 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:49:33.081070 | orchestrator |
2026-03-16 00:49:33.081076 | orchestrator | PLAY RECAP *********************************************************************
2026-03-16 00:49:33.081082 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-16 00:49:33.081097 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-16 00:49:33.081104 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-16 00:49:33.081110 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-16 00:49:33.081117 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-16 00:49:33.081123 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-16 00:49:33.081130 | orchestrator |
2026-03-16 00:49:33.081135 | orchestrator |
2026-03-16 00:49:33.081142 | orchestrator | TASKS RECAP ********************************************************************
2026-03-16 00:49:33.081149 | orchestrator | Monday 16 March 2026 00:49:29 +0000 (0:00:08.155) 0:00:57.917 **********
2026-03-16 00:49:33.081156 | orchestrator | ===============================================================================
2026-03-16 00:49:33.081162 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 13.60s
2026-03-16 00:49:33.081168 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 9.64s
2026-03-16 00:49:33.081174 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.07s
2026-03-16 00:49:33.081181 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.07s
2026-03-16 00:49:33.081187 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.87s
2026-03-16 00:49:33.081193 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.83s
2026-03-16 00:49:33.081199 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.77s
2026-03-16 00:49:33.081205 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.06s
2026-03-16 00:49:33.081212 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.43s
2026-03-16 00:49:33.081216 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.41s
2026-03-16 00:49:33.081220 | orchestrator | module-load : Load modules ---------------------------------------------- 1.33s
2026-03-16 00:49:33.081223 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.31s
2026-03-16 00:49:33.081227 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.26s
2026-03-16 00:49:33.081231 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.24s
2026-03-16 00:49:33.081234 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.02s
2026-03-16 00:49:33.081238 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.87s
2026-03-16 00:49:33.081242 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.83s
2026-03-16 00:49:33.081246 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.78s
2026-03-16 00:49:33.081253 | orchestrator | 2026-03-16 00:49:33 | INFO  | Task 3fdc93e1-6c1a-4e20-8357-48b2fa1aca99 is in state SUCCESS
2026-03-16 00:49:33.081261 | orchestrator | 2026-03-16 00:49:33 | INFO  | Task 286dcda1-9059-41c5-afad-f27ec04ba258 is in state STARTED
2026-03-16 00:49:33.081265 | orchestrator | 2026-03-16 00:49:33 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:49:36.141720 | orchestrator | 2026-03-16 00:49:36 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:49:36.147890 | orchestrator | 2026-03-16 00:49:36 | INFO  | Task c96e2f8a-52ca-433e-ac09-879563aff87f is in state STARTED
2026-03-16 00:49:36.148437 | orchestrator | 2026-03-16 00:49:36 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED
2026-03-16 00:49:36.148977 | orchestrator | 2026-03-16 00:49:36 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED
2026-03-16 00:49:36.149759 | orchestrator | 2026-03-16 00:49:36 | INFO  | Task 286dcda1-9059-41c5-afad-f27ec04ba258 is in state STARTED
2026-03-16 00:49:36.149796 | orchestrator | 2026-03-16 00:49:36 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:49:39.187294 | orchestrator | 2026-03-16 00:49:39 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:49:39.187424 | orchestrator | 2026-03-16 00:49:39 | INFO  | Task c96e2f8a-52ca-433e-ac09-879563aff87f is in state STARTED
2026-03-16 00:49:39.187988 | orchestrator | 2026-03-16 00:49:39 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED
2026-03-16 00:49:39.188502 | orchestrator | 2026-03-16 00:49:39 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED
2026-03-16 00:49:39.191020 | orchestrator | 2026-03-16 00:49:39 | INFO  | Task 286dcda1-9059-41c5-afad-f27ec04ba258 is in state STARTED
2026-03-16 00:49:39.191073 | orchestrator | 2026-03-16 00:49:39 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:49:42.214543 | orchestrator | 2026-03-16 00:49:42 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:49:42.214901 | orchestrator | 2026-03-16 00:49:42 | INFO  | Task c96e2f8a-52ca-433e-ac09-879563aff87f is in state STARTED
2026-03-16 00:49:42.215716 | orchestrator | 2026-03-16 00:49:42 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED
2026-03-16 00:49:42.216529 | orchestrator | 2026-03-16 00:49:42 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED
2026-03-16 00:49:42.217148 | orchestrator | 2026-03-16 00:49:42 | INFO  | Task 286dcda1-9059-41c5-afad-f27ec04ba258 is in state STARTED
2026-03-16 00:49:42.217167 | orchestrator | 2026-03-16 00:49:42 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:49:45.242049 | orchestrator | 2026-03-16 00:49:45 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:49:45.243918 | orchestrator | 2026-03-16 00:49:45 | INFO  | Task
c96e2f8a-52ca-433e-ac09-879563aff87f is in state STARTED 2026-03-16 00:49:45.244341 | orchestrator | 2026-03-16 00:49:45 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:49:45.245255 | orchestrator | 2026-03-16 00:49:45 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:49:45.245925 | orchestrator | 2026-03-16 00:49:45 | INFO  | Task 286dcda1-9059-41c5-afad-f27ec04ba258 is in state STARTED 2026-03-16 00:49:45.245955 | orchestrator | 2026-03-16 00:49:45 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:49:48.286172 | orchestrator | 2026-03-16 00:49:48 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:49:48.286271 | orchestrator | 2026-03-16 00:49:48 | INFO  | Task c96e2f8a-52ca-433e-ac09-879563aff87f is in state STARTED 2026-03-16 00:49:48.286320 | orchestrator | 2026-03-16 00:49:48 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:49:48.286735 | orchestrator | 2026-03-16 00:49:48 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:49:48.287431 | orchestrator | 2026-03-16 00:49:48 | INFO  | Task 286dcda1-9059-41c5-afad-f27ec04ba258 is in state STARTED 2026-03-16 00:49:48.287526 | orchestrator | 2026-03-16 00:49:48 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:49:51.356108 | orchestrator | 2026-03-16 00:49:51 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:49:51.356306 | orchestrator | 2026-03-16 00:49:51 | INFO  | Task c96e2f8a-52ca-433e-ac09-879563aff87f is in state STARTED 2026-03-16 00:49:51.357085 | orchestrator | 2026-03-16 00:49:51 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:49:51.357711 | orchestrator | 2026-03-16 00:49:51 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:49:51.358602 | orchestrator | 2026-03-16 00:49:51 | INFO  | Task 
286dcda1-9059-41c5-afad-f27ec04ba258 is in state STARTED 2026-03-16 00:49:51.358625 | orchestrator | 2026-03-16 00:49:51 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:49:54.402976 | orchestrator | 2026-03-16 00:49:54 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:49:54.403251 | orchestrator | 2026-03-16 00:49:54 | INFO  | Task c96e2f8a-52ca-433e-ac09-879563aff87f is in state STARTED 2026-03-16 00:49:54.404396 | orchestrator | 2026-03-16 00:49:54 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:49:54.405503 | orchestrator | 2026-03-16 00:49:54 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:49:54.406586 | orchestrator | 2026-03-16 00:49:54 | INFO  | Task 286dcda1-9059-41c5-afad-f27ec04ba258 is in state STARTED 2026-03-16 00:49:54.406627 | orchestrator | 2026-03-16 00:49:54 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:49:57.436822 | orchestrator | 2026-03-16 00:49:57 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:49:57.436943 | orchestrator | 2026-03-16 00:49:57 | INFO  | Task c96e2f8a-52ca-433e-ac09-879563aff87f is in state STARTED 2026-03-16 00:49:57.437670 | orchestrator | 2026-03-16 00:49:57 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:49:57.438140 | orchestrator | 2026-03-16 00:49:57 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:49:57.438825 | orchestrator | 2026-03-16 00:49:57 | INFO  | Task 286dcda1-9059-41c5-afad-f27ec04ba258 is in state STARTED 2026-03-16 00:49:57.438840 | orchestrator | 2026-03-16 00:49:57 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:50:00.519343 | orchestrator | 2026-03-16 00:50:00 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:50:00.521861 | orchestrator | 2026-03-16 00:50:00 | INFO  | Task 
c96e2f8a-52ca-433e-ac09-879563aff87f is in state STARTED 2026-03-16 00:50:00.524385 | orchestrator | 2026-03-16 00:50:00 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:50:00.526376 | orchestrator | 2026-03-16 00:50:00 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:50:00.527991 | orchestrator | 2026-03-16 00:50:00 | INFO  | Task 286dcda1-9059-41c5-afad-f27ec04ba258 is in state STARTED 2026-03-16 00:50:00.528046 | orchestrator | 2026-03-16 00:50:00 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:50:03.566508 | orchestrator | 2026-03-16 00:50:03 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:50:03.567669 | orchestrator | 2026-03-16 00:50:03 | INFO  | Task c96e2f8a-52ca-433e-ac09-879563aff87f is in state STARTED 2026-03-16 00:50:03.568295 | orchestrator | 2026-03-16 00:50:03 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:50:03.568712 | orchestrator | 2026-03-16 00:50:03 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:50:03.570070 | orchestrator | 2026-03-16 00:50:03 | INFO  | Task 286dcda1-9059-41c5-afad-f27ec04ba258 is in state STARTED 2026-03-16 00:50:03.570091 | orchestrator | 2026-03-16 00:50:03 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:50:06.600061 | orchestrator | 2026-03-16 00:50:06 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:50:06.601168 | orchestrator | 2026-03-16 00:50:06 | INFO  | Task c96e2f8a-52ca-433e-ac09-879563aff87f is in state STARTED 2026-03-16 00:50:06.602305 | orchestrator | 2026-03-16 00:50:06 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:50:06.604586 | orchestrator | 2026-03-16 00:50:06 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:50:06.606232 | orchestrator | 2026-03-16 00:50:06 | INFO  | Task 
286dcda1-9059-41c5-afad-f27ec04ba258 is in state STARTED 2026-03-16 00:50:06.606334 | orchestrator | 2026-03-16 00:50:06 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:50:09.667971 | orchestrator | 2026-03-16 00:50:09 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:50:09.668349 | orchestrator | 2026-03-16 00:50:09 | INFO  | Task c96e2f8a-52ca-433e-ac09-879563aff87f is in state STARTED 2026-03-16 00:50:09.669838 | orchestrator | 2026-03-16 00:50:09 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:50:09.670698 | orchestrator | 2026-03-16 00:50:09 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:50:09.671510 | orchestrator | 2026-03-16 00:50:09 | INFO  | Task 286dcda1-9059-41c5-afad-f27ec04ba258 is in state STARTED 2026-03-16 00:50:09.671571 | orchestrator | 2026-03-16 00:50:09 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:50:12.701082 | orchestrator | 2026-03-16 00:50:12 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:50:12.707361 | orchestrator | 2026-03-16 00:50:12 | INFO  | Task c96e2f8a-52ca-433e-ac09-879563aff87f is in state STARTED 2026-03-16 00:50:12.707487 | orchestrator | 2026-03-16 00:50:12 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:50:12.707495 | orchestrator | 2026-03-16 00:50:12 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:50:12.707501 | orchestrator | 2026-03-16 00:50:12 | INFO  | Task 286dcda1-9059-41c5-afad-f27ec04ba258 is in state STARTED 2026-03-16 00:50:12.707507 | orchestrator | 2026-03-16 00:50:12 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:50:15.740326 | orchestrator | 2026-03-16 00:50:15 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:50:15.743585 | orchestrator | 2026-03-16 00:50:15 | INFO  | Task 
c96e2f8a-52ca-433e-ac09-879563aff87f is in state STARTED 2026-03-16 00:50:15.743709 | orchestrator | 2026-03-16 00:50:15 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:50:15.744709 | orchestrator | 2026-03-16 00:50:15 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:50:15.745679 | orchestrator | 2026-03-16 00:50:15 | INFO  | Task 286dcda1-9059-41c5-afad-f27ec04ba258 is in state STARTED 2026-03-16 00:50:15.745791 | orchestrator | 2026-03-16 00:50:15 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:50:18.810836 | orchestrator | 2026-03-16 00:50:18 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:50:18.816327 | orchestrator | 2026-03-16 00:50:18 | INFO  | Task c96e2f8a-52ca-433e-ac09-879563aff87f is in state STARTED 2026-03-16 00:50:18.825163 | orchestrator | 2026-03-16 00:50:18 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:50:18.826595 | orchestrator | 2026-03-16 00:50:18 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:50:18.827466 | orchestrator | 2026-03-16 00:50:18 | INFO  | Task 286dcda1-9059-41c5-afad-f27ec04ba258 is in state STARTED 2026-03-16 00:50:18.827496 | orchestrator | 2026-03-16 00:50:18 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:50:21.863650 | orchestrator | 2026-03-16 00:50:21 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:50:21.864919 | orchestrator | 2026-03-16 00:50:21 | INFO  | Task c96e2f8a-52ca-433e-ac09-879563aff87f is in state STARTED 2026-03-16 00:50:21.866785 | orchestrator | 2026-03-16 00:50:21 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:50:21.867663 | orchestrator | 2026-03-16 00:50:21 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:50:21.870044 | orchestrator | 2026-03-16 00:50:21 | INFO  | Task 
286dcda1-9059-41c5-afad-f27ec04ba258 is in state STARTED 2026-03-16 00:50:21.870079 | orchestrator | 2026-03-16 00:50:21 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:50:25.139706 | orchestrator | 2026-03-16 00:50:25 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:50:25.141500 | orchestrator | 2026-03-16 00:50:25 | INFO  | Task c96e2f8a-52ca-433e-ac09-879563aff87f is in state STARTED 2026-03-16 00:50:25.142717 | orchestrator | 2026-03-16 00:50:25 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:50:25.144504 | orchestrator | 2026-03-16 00:50:25 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:50:25.147716 | orchestrator | 2026-03-16 00:50:25 | INFO  | Task 286dcda1-9059-41c5-afad-f27ec04ba258 is in state STARTED 2026-03-16 00:50:25.148254 | orchestrator | 2026-03-16 00:50:25 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:50:28.257755 | orchestrator | 2026-03-16 00:50:28 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:50:28.257838 | orchestrator | 2026-03-16 00:50:28 | INFO  | Task c96e2f8a-52ca-433e-ac09-879563aff87f is in state STARTED 2026-03-16 00:50:28.257844 | orchestrator | 2026-03-16 00:50:28 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:50:28.257848 | orchestrator | 2026-03-16 00:50:28 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:50:28.257852 | orchestrator | 2026-03-16 00:50:28 | INFO  | Task 286dcda1-9059-41c5-afad-f27ec04ba258 is in state STARTED 2026-03-16 00:50:28.257857 | orchestrator | 2026-03-16 00:50:28 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:50:31.347133 | orchestrator | 2026-03-16 00:50:31 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:50:31.347213 | orchestrator | 2026-03-16 00:50:31 | INFO  | Task 
c96e2f8a-52ca-433e-ac09-879563aff87f is in state STARTED 2026-03-16 00:50:31.347221 | orchestrator | 2026-03-16 00:50:31 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:50:31.347822 | orchestrator | 2026-03-16 00:50:31 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:50:31.348460 | orchestrator | 2026-03-16 00:50:31 | INFO  | Task 286dcda1-9059-41c5-afad-f27ec04ba258 is in state STARTED 2026-03-16 00:50:31.348487 | orchestrator | 2026-03-16 00:50:31 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:50:34.381073 | orchestrator | 2026-03-16 00:50:34 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:50:34.382991 | orchestrator | 2026-03-16 00:50:34 | INFO  | Task c96e2f8a-52ca-433e-ac09-879563aff87f is in state STARTED 2026-03-16 00:50:34.383591 | orchestrator | 2026-03-16 00:50:34 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state STARTED 2026-03-16 00:50:34.383891 | orchestrator | 2026-03-16 00:50:34 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:50:34.384659 | orchestrator | 2026-03-16 00:50:34 | INFO  | Task 286dcda1-9059-41c5-afad-f27ec04ba258 is in state STARTED 2026-03-16 00:50:34.384701 | orchestrator | 2026-03-16 00:50:34 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:50:37.458800 | orchestrator | 2026-03-16 00:50:37.458863 | orchestrator | 2026-03-16 00:50:37.458872 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-03-16 00:50:37.458879 | orchestrator | 2026-03-16 00:50:37.458886 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-03-16 00:50:37.458893 | orchestrator | Monday 16 March 2026 00:46:13 +0000 (0:00:00.128) 0:00:00.128 ********** 2026-03-16 00:50:37.458901 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:50:37.458908 | orchestrator | ok: 
[testbed-node-4] 2026-03-16 00:50:37.458915 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:50:37.458922 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:50:37.458927 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:50:37.458934 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:50:37.458940 | orchestrator | 2026-03-16 00:50:37.458946 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-03-16 00:50:37.458954 | orchestrator | Monday 16 March 2026 00:46:14 +0000 (0:00:00.701) 0:00:00.830 ********** 2026-03-16 00:50:37.458958 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:50:37.458963 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:50:37.458967 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:50:37.458970 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:50:37.458974 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:50:37.458978 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:50:37.458982 | orchestrator | 2026-03-16 00:50:37.458985 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-03-16 00:50:37.458989 | orchestrator | Monday 16 March 2026 00:46:14 +0000 (0:00:00.579) 0:00:01.409 ********** 2026-03-16 00:50:37.458993 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:50:37.458997 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:50:37.459000 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:50:37.459004 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:50:37.459008 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:50:37.459011 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:50:37.459015 | orchestrator | 2026-03-16 00:50:37.459019 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-03-16 00:50:37.459023 | orchestrator | Monday 16 March 2026 00:46:15 +0000 (0:00:00.689) 0:00:02.098 ********** 2026-03-16 
00:50:37.459026 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:50:37.459030 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:50:37.459034 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:50:37.459037 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:50:37.459066 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:50:37.459071 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:50:37.459090 | orchestrator | 2026-03-16 00:50:37.459094 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-03-16 00:50:37.459098 | orchestrator | Monday 16 March 2026 00:46:17 +0000 (0:00:02.123) 0:00:04.222 ********** 2026-03-16 00:50:37.459102 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:50:37.459106 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:50:37.459109 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:50:37.459113 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:50:37.459117 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:50:37.459120 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:50:37.459124 | orchestrator | 2026-03-16 00:50:37.459128 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-03-16 00:50:37.459132 | orchestrator | Monday 16 March 2026 00:46:19 +0000 (0:00:01.805) 0:00:06.027 ********** 2026-03-16 00:50:37.459143 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:50:37.459147 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:50:37.459151 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:50:37.459155 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:50:37.459158 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:50:37.459162 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:50:37.459166 | orchestrator | 2026-03-16 00:50:37.459169 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 
2026-03-16 00:50:37.459173 | orchestrator | Monday 16 March 2026 00:46:21 +0000 (0:00:02.360) 0:00:08.388 ********** 2026-03-16 00:50:37.459177 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:50:37.459181 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:50:37.459184 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:50:37.459188 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:50:37.459192 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:50:37.459196 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:50:37.459200 | orchestrator | 2026-03-16 00:50:37.459203 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-03-16 00:50:37.459207 | orchestrator | Monday 16 March 2026 00:46:23 +0000 (0:00:01.245) 0:00:09.634 ********** 2026-03-16 00:50:37.459211 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:50:37.459215 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:50:37.459218 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:50:37.459222 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:50:37.459226 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:50:37.459229 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:50:37.459233 | orchestrator | 2026-03-16 00:50:37.459237 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-03-16 00:50:37.459241 | orchestrator | Monday 16 March 2026 00:46:24 +0000 (0:00:00.995) 0:00:10.629 ********** 2026-03-16 00:50:37.459244 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-16 00:50:37.459248 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-16 00:50:37.459252 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:50:37.459256 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-16 00:50:37.459259 | 
orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-16 00:50:37.459263 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:50:37.459267 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-16 00:50:37.459271 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-16 00:50:37.459274 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:50:37.459278 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-16 00:50:37.459291 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-16 00:50:37.459295 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:50:37.459299 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-16 00:50:37.459307 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-16 00:50:37.459311 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:50:37.459315 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-16 00:50:37.459319 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-16 00:50:37.459322 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:50:37.459326 | orchestrator | 2026-03-16 00:50:37.459330 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-03-16 00:50:37.459334 | orchestrator | Monday 16 March 2026 00:46:24 +0000 (0:00:00.664) 0:00:11.293 ********** 2026-03-16 00:50:37.459388 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:50:37.459392 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:50:37.459396 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:50:37.459400 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:50:37.459404 | orchestrator | 
skipping: [testbed-node-1] 2026-03-16 00:50:37.459407 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:50:37.459411 | orchestrator | 2026-03-16 00:50:37.459415 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-03-16 00:50:37.459420 | orchestrator | Monday 16 March 2026 00:46:26 +0000 (0:00:01.552) 0:00:12.845 ********** 2026-03-16 00:50:37.459424 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:50:37.459428 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:50:37.459431 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:50:37.459435 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:50:37.459440 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:50:37.459446 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:50:37.459452 | orchestrator | 2026-03-16 00:50:37.459458 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-03-16 00:50:37.459464 | orchestrator | Monday 16 March 2026 00:46:27 +0000 (0:00:00.958) 0:00:13.804 ********** 2026-03-16 00:50:37.459470 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:50:37.459476 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:50:37.459483 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:50:37.459489 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:50:37.459495 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:50:37.459501 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:50:37.459507 | orchestrator | 2026-03-16 00:50:37.459513 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-03-16 00:50:37.459520 | orchestrator | Monday 16 March 2026 00:46:34 +0000 (0:00:07.293) 0:00:21.098 ********** 2026-03-16 00:50:37.459526 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:50:37.459532 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:50:37.459538 | orchestrator | skipping: 
[testbed-node-4] 2026-03-16 00:50:37.459544 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:50:37.459550 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:50:37.459556 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:50:37.459562 | orchestrator | 2026-03-16 00:50:37.459568 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-03-16 00:50:37.459580 | orchestrator | Monday 16 March 2026 00:46:36 +0000 (0:00:01.985) 0:00:23.084 ********** 2026-03-16 00:50:37.459586 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:50:37.459592 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:50:37.459599 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:50:37.459605 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:50:37.459611 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:50:37.459617 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:50:37.459622 | orchestrator | 2026-03-16 00:50:37.459627 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-03-16 00:50:37.459631 | orchestrator | Monday 16 March 2026 00:46:38 +0000 (0:00:02.225) 0:00:25.309 ********** 2026-03-16 00:50:37.459635 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:50:37.459643 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:50:37.459647 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:50:37.459651 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:50:37.459655 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:50:37.459658 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:50:37.459662 | orchestrator | 2026-03-16 00:50:37.459666 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-03-16 00:50:37.459670 | orchestrator | Monday 16 March 2026 00:46:40 +0000 (0:00:01.355) 0:00:26.665 ********** 
2026-03-16 00:50:37.459673 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-03-16 00:50:37.459678 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-03-16 00:50:37.459681 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:50:37.459685 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-03-16 00:50:37.459689 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-03-16 00:50:37.459692 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:50:37.459696 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-03-16 00:50:37.459700 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-03-16 00:50:37.459704 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:50:37.459707 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-03-16 00:50:37.459711 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-03-16 00:50:37.459715 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:50:37.459718 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-03-16 00:50:37.459722 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-03-16 00:50:37.459726 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:50:37.459729 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-03-16 00:50:37.459733 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-03-16 00:50:37.459737 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:50:37.459741 | orchestrator | 2026-03-16 00:50:37.459744 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-03-16 00:50:37.459753 | orchestrator | Monday 16 March 2026 00:46:42 +0000 (0:00:02.066) 0:00:28.731 ********** 2026-03-16 00:50:37.459757 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:50:37.459761 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:50:37.459764 | 
orchestrator | skipping: [testbed-node-5]
2026-03-16 00:50:37.459768 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:50:37.459772 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:50:37.459776 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:50:37.459779 | orchestrator |
2026-03-16 00:50:37.459783 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-03-16 00:50:37.459787 | orchestrator | Monday 16 March 2026 00:46:43 +0000 (0:00:00.830) 0:00:29.562 **********
2026-03-16 00:50:37.459791 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:50:37.459794 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:50:37.459798 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:50:37.459802 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:50:37.459806 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:50:37.459809 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:50:37.459813 | orchestrator |
2026-03-16 00:50:37.459817 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-03-16 00:50:37.459821 | orchestrator |
2026-03-16 00:50:37.459825 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-03-16 00:50:37.459828 | orchestrator | Monday 16 March 2026 00:46:44 +0000 (0:00:01.579) 0:00:31.141 **********
2026-03-16 00:50:37.459832 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:50:37.459836 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:50:37.459840 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:50:37.459843 | orchestrator |
2026-03-16 00:50:37.459847 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-03-16 00:50:37.459854 | orchestrator | Monday 16 March 2026 00:46:46 +0000 (0:00:02.219) 0:00:33.361 **********
2026-03-16 00:50:37.459858 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:50:37.459861 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:50:37.459865 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:50:37.459869 | orchestrator |
2026-03-16 00:50:37.459873 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-03-16 00:50:37.459877 | orchestrator | Monday 16 March 2026 00:46:48 +0000 (0:00:01.427) 0:00:34.789 **********
2026-03-16 00:50:37.459880 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:50:37.459884 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:50:37.459888 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:50:37.459891 | orchestrator |
2026-03-16 00:50:37.459895 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-03-16 00:50:37.459899 | orchestrator | Monday 16 March 2026 00:46:49 +0000 (0:00:01.086) 0:00:35.876 **********
2026-03-16 00:50:37.459903 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:50:37.459906 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:50:37.459910 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:50:37.459914 | orchestrator |
2026-03-16 00:50:37.459918 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-03-16 00:50:37.459921 | orchestrator | Monday 16 March 2026 00:46:50 +0000 (0:00:00.733) 0:00:36.609 **********
2026-03-16 00:50:37.459925 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:50:37.459929 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:50:37.459933 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:50:37.459936 | orchestrator |
2026-03-16 00:50:37.459940 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-03-16 00:50:37.459944 | orchestrator | Monday 16 March 2026 00:46:50 +0000 (0:00:00.385) 0:00:36.994 **********
2026-03-16 00:50:37.459948 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:50:37.459952 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:50:37.459955 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:50:37.459959 | orchestrator |
2026-03-16 00:50:37.459963 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-03-16 00:50:37.459967 | orchestrator | Monday 16 March 2026 00:46:51 +0000 (0:00:01.432) 0:00:38.427 **********
2026-03-16 00:50:37.460275 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:50:37.460284 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:50:37.460288 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:50:37.460292 | orchestrator |
2026-03-16 00:50:37.460296 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-03-16 00:50:37.460300 | orchestrator | Monday 16 March 2026 00:46:53 +0000 (0:00:01.517) 0:00:39.944 **********
2026-03-16 00:50:37.460304 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-16 00:50:37.460308 | orchestrator |
2026-03-16 00:50:37.460312 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-03-16 00:50:37.460315 | orchestrator | Monday 16 March 2026 00:46:54 +0000 (0:00:00.581) 0:00:40.526 **********
2026-03-16 00:50:37.460319 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:50:37.460323 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:50:37.460327 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:50:37.460331 | orchestrator |
2026-03-16 00:50:37.460334 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-03-16 00:50:37.460366 | orchestrator | Monday 16 March 2026 00:46:56 +0000 (0:00:02.475) 0:00:43.002 **********
2026-03-16 00:50:37.460370 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:50:37.460373 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:50:37.460379 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:50:37.460386 | orchestrator |
2026-03-16 00:50:37.460392 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-03-16 00:50:37.460399 | orchestrator | Monday 16 March 2026 00:46:57 +0000 (0:00:00.653) 0:00:43.655 **********
2026-03-16 00:50:37.460405 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:50:37.460417 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:50:37.460423 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:50:37.460430 | orchestrator |
2026-03-16 00:50:37.460436 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-03-16 00:50:37.460443 | orchestrator | Monday 16 March 2026 00:46:58 +0000 (0:00:01.100) 0:00:44.756 **********
2026-03-16 00:50:37.460449 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:50:37.460456 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:50:37.460462 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:50:37.460469 | orchestrator |
2026-03-16 00:50:37.460476 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-03-16 00:50:37.460490 | orchestrator | Monday 16 March 2026 00:46:59 +0000 (0:00:00.909) 0:00:46.289 **********
2026-03-16 00:50:37.460497 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:50:37.460503 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:50:37.460509 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:50:37.460516 | orchestrator |
2026-03-16 00:50:37.460522 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-03-16 00:50:37.460528 | orchestrator | Monday 16 March 2026 00:47:00 +0000 (0:00:00.743) 0:00:47.198 **********
2026-03-16 00:50:37.460535 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:50:37.460541 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:50:37.460548 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:50:37.460554 | orchestrator |
2026-03-16 00:50:37.460561 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-03-16 00:50:37.460568 | orchestrator | Monday 16 March 2026 00:47:01 +0000 (0:00:00.743) 0:00:47.942 **********
2026-03-16 00:50:37.460575 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:50:37.460582 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:50:37.460588 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:50:37.460595 | orchestrator |
2026-03-16 00:50:37.460602 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-03-16 00:50:37.460609 | orchestrator | Monday 16 March 2026 00:47:03 +0000 (0:00:01.754) 0:00:49.696 **********
2026-03-16 00:50:37.460615 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:50:37.460621 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:50:37.460625 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:50:37.460628 | orchestrator |
2026-03-16 00:50:37.460632 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-03-16 00:50:37.460636 | orchestrator | Monday 16 March 2026 00:47:05 +0000 (0:00:02.413) 0:00:52.110 **********
2026-03-16 00:50:37.460640 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:50:37.460643 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:50:37.460648 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:50:37.460654 | orchestrator |
2026-03-16 00:50:37.460660 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-03-16 00:50:37.460667 | orchestrator | Monday 16 March 2026 00:47:06 +0000 (0:00:00.530) 0:00:52.640 **********
2026-03-16 00:50:37.460678 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-16 00:50:37.460685 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-16 00:50:37.460691 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-16 00:50:37.460698 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-16 00:50:37.460704 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-16 00:50:37.460709 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-16 00:50:37.460722 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-16 00:50:37.460728 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-16 00:50:37.460734 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-16 00:50:37.460740 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-16 00:50:37.460746 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-16 00:50:37.460752 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-16 00:50:37.460758 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:50:37.460764 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:50:37.460770 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:50:37.460776 | orchestrator |
2026-03-16 00:50:37.460782 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-03-16 00:50:37.460788 | orchestrator | Monday 16 March 2026 00:47:49 +0000 (0:00:43.548) 0:01:36.188 **********
2026-03-16 00:50:37.460794 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:50:37.460800 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:50:37.460806 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:50:37.460812 | orchestrator |
2026-03-16 00:50:37.460818 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-03-16 00:50:37.460824 | orchestrator | Monday 16 March 2026 00:47:50 +0000 (0:00:00.413) 0:01:36.602 **********
2026-03-16 00:50:37.460831 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:50:37.460837 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:50:37.460844 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:50:37.460850 | orchestrator |
2026-03-16 00:50:37.460857 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-03-16 00:50:37.460863 | orchestrator | Monday 16 March 2026 00:47:51 +0000 (0:00:01.162) 0:01:37.765 **********
2026-03-16 00:50:37.460869 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:50:37.460875 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:50:37.460881 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:50:37.460888 | orchestrator |
2026-03-16 00:50:37.460900 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-03-16 00:50:37.460906 | orchestrator | Monday 16 March 2026 00:47:53 +0000 (0:00:01.980) 0:01:39.745 **********
2026-03-16 00:50:37.460913 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:50:37.460919 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:50:37.460925 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:50:37.460931 | orchestrator |
2026-03-16 00:50:37.460937 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-03-16 00:50:37.460944 | orchestrator | Monday 16 March 2026 00:48:17 +0000 (0:00:24.147) 0:02:03.893 **********
2026-03-16 00:50:37.460950 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:50:37.460956 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:50:37.460963 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:50:37.460969 | orchestrator |
2026-03-16 00:50:37.460976 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-03-16 00:50:37.460982 | orchestrator | Monday 16 March 2026 00:48:18 +0000 (0:00:00.659) 0:02:04.553 **********
2026-03-16 00:50:37.460989 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:50:37.460996 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:50:37.461002 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:50:37.461009 | orchestrator |
2026-03-16 00:50:37.461016 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-03-16 00:50:37.461022 | orchestrator | Monday 16 March 2026 00:48:18 +0000 (0:00:00.635) 0:02:05.188 **********
2026-03-16 00:50:37.461035 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:50:37.461042 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:50:37.461049 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:50:37.461055 | orchestrator |
2026-03-16 00:50:37.461061 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-03-16 00:50:37.461068 | orchestrator | Monday 16 March 2026 00:48:19 +0000 (0:00:00.637) 0:02:05.825 **********
2026-03-16 00:50:37.461074 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:50:37.461080 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:50:37.461087 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:50:37.461093 | orchestrator |
2026-03-16 00:50:37.461100 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-03-16 00:50:37.461107 | orchestrator | Monday 16 March 2026 00:48:20 +0000 (0:00:01.198) 0:02:07.024 **********
2026-03-16 00:50:37.461114 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:50:37.461124 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:50:37.461131 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:50:37.461138 | orchestrator |
2026-03-16 00:50:37.461144 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-03-16 00:50:37.461151 | orchestrator | Monday 16 March 2026 00:48:20 +0000 (0:00:00.307) 0:02:07.332 **********
2026-03-16 00:50:37.461157 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:50:37.461164 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:50:37.461171 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:50:37.461178 | orchestrator |
2026-03-16 00:50:37.461185 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-03-16 00:50:37.461192 | orchestrator | Monday 16 March 2026 00:48:21 +0000 (0:00:00.673) 0:02:08.005 **********
2026-03-16 00:50:37.461199 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:50:37.461205 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:50:37.461212 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:50:37.461219 | orchestrator |
2026-03-16 00:50:37.461225 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-03-16 00:50:37.461232 | orchestrator | Monday 16 March 2026 00:48:22 +0000 (0:00:00.651) 0:02:08.656 **********
2026-03-16 00:50:37.461239 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:50:37.461245 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:50:37.461252 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:50:37.461258 | orchestrator |
2026-03-16 00:50:37.461265 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-03-16 00:50:37.461272 | orchestrator | Monday 16 March 2026 00:48:23 +0000 (0:00:01.353) 0:02:10.010 **********
2026-03-16 00:50:37.461279 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:50:37.461286 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:50:37.461293 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:50:37.461300 | orchestrator |
2026-03-16 00:50:37.461307 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-03-16 00:50:37.461314 | orchestrator | Monday 16 March 2026 00:48:24 +0000 (0:00:01.028) 0:02:11.038 **********
2026-03-16 00:50:37.461320 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:50:37.461327 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:50:37.461334 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:50:37.461361 | orchestrator |
2026-03-16 00:50:37.461367 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-03-16 00:50:37.461373 | orchestrator | Monday 16 March 2026 00:48:24 +0000 (0:00:00.347) 0:02:11.386 **********
2026-03-16 00:50:37.461379 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:50:37.461385 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:50:37.461392 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:50:37.461399 | orchestrator |
2026-03-16 00:50:37.461405 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-03-16 00:50:37.461412 | orchestrator | Monday 16 March 2026 00:48:25 +0000 (0:00:00.309) 0:02:11.696 **********
2026-03-16 00:50:37.461424 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:50:37.461431 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:50:37.461438 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:50:37.461445 | orchestrator |
2026-03-16 00:50:37.461452 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-03-16 00:50:37.461459 | orchestrator | Monday 16 March 2026 00:48:25 +0000 (0:00:00.820) 0:02:12.517 **********
2026-03-16 00:50:37.461466 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:50:37.461473 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:50:37.461479 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:50:37.461486 | orchestrator |
2026-03-16 00:50:37.461493 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-03-16 00:50:37.461500 | orchestrator | Monday 16 March 2026 00:48:26 +0000 (0:00:00.636) 0:02:13.153 **********
2026-03-16 00:50:37.461507 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-16 00:50:37.461519 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-16 00:50:37.461526 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-16 00:50:37.461533 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-16 00:50:37.461539 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-16 00:50:37.461545 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-16 00:50:37.461551 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-16 00:50:37.461558 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-16 00:50:37.461564 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-16 00:50:37.461570 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-03-16 00:50:37.461576 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-16 00:50:37.461582 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-16 00:50:37.461588 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-03-16 00:50:37.461594 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-16 00:50:37.461600 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-16 00:50:37.461607 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-16 00:50:37.461617 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-16 00:50:37.461623 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-16 00:50:37.461630 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-16 00:50:37.461636 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-16 00:50:37.461643 | orchestrator |
2026-03-16 00:50:37.461649 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-03-16 00:50:37.461654 | orchestrator |
2026-03-16 00:50:37.461661 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-03-16 00:50:37.461667 | orchestrator | Monday 16 March 2026 00:48:30 +0000 (0:00:03.512) 0:02:16.665 **********
2026-03-16 00:50:37.461673 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:50:37.461679 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:50:37.461685 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:50:37.461692 | orchestrator |
2026-03-16 00:50:37.461703 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-03-16 00:50:37.461709 | orchestrator | Monday 16 March 2026 00:48:30 +0000 (0:00:00.684) 0:02:17.350 **********
2026-03-16 00:50:37.461714 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:50:37.461717 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:50:37.461721 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:50:37.461725 | orchestrator |
2026-03-16 00:50:37.461728 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-03-16 00:50:37.461732 | orchestrator | Monday 16 March 2026 00:48:31 +0000 (0:00:00.556) 0:02:17.906 **********
2026-03-16 00:50:37.461736 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:50:37.461740 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:50:37.461743 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:50:37.461747 | orchestrator |
2026-03-16 00:50:37.461751 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-03-16 00:50:37.461755 | orchestrator | Monday 16 March 2026 00:48:31 +0000 (0:00:00.296) 0:02:18.202 **********
2026-03-16 00:50:37.461758 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-16 00:50:37.461762 | orchestrator |
2026-03-16 00:50:37.461766 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-03-16 00:50:37.461770 | orchestrator | Monday 16 March 2026 00:48:32 +0000 (0:00:00.513) 0:02:18.715 **********
2026-03-16 00:50:37.461774 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:50:37.461777 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:50:37.461781 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:50:37.461785 | orchestrator |
2026-03-16 00:50:37.461789 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-03-16 00:50:37.461792 | orchestrator | Monday 16 March 2026 00:48:32 +0000 (0:00:00.232) 0:02:18.947 **********
2026-03-16 00:50:37.461796 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:50:37.461800 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:50:37.461803 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:50:37.461807 | orchestrator |
2026-03-16 00:50:37.461811 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-03-16 00:50:37.461818 | orchestrator | Monday 16 March 2026 00:48:32 +0000 (0:00:00.241) 0:02:19.188 **********
2026-03-16 00:50:37.461824 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:50:37.461829 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:50:37.461835 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:50:37.461841 | orchestrator |
2026-03-16 00:50:37.461847 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-03-16 00:50:37.461853 | orchestrator | Monday 16 March 2026 00:48:32 +0000 (0:00:00.256) 0:02:19.445 **********
2026-03-16 00:50:37.461860 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:50:37.461866 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:50:37.461872 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:50:37.461879 | orchestrator |
2026-03-16 00:50:37.461891 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-03-16 00:50:37.461898 | orchestrator | Monday 16 March 2026 00:48:33 +0000 (0:00:00.756) 0:02:20.201 **********
2026-03-16 00:50:37.461904 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:50:37.461910 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:50:37.461917 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:50:37.461923 | orchestrator |
2026-03-16 00:50:37.461929 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-03-16 00:50:37.461935 | orchestrator | Monday 16 March 2026 00:48:34 +0000 (0:00:01.131) 0:02:21.332 **********
2026-03-16 00:50:37.461939 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:50:37.461943 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:50:37.461947 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:50:37.461950 | orchestrator |
2026-03-16 00:50:37.461954 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-03-16 00:50:37.461958 | orchestrator | Monday 16 March 2026 00:48:36 +0000 (0:00:01.322) 0:02:22.655 **********
2026-03-16 00:50:37.461966 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:50:37.461970 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:50:37.461973 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:50:37.461977 | orchestrator |
2026-03-16 00:50:37.461981 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-03-16 00:50:37.461984 | orchestrator |
2026-03-16 00:50:37.461988 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-03-16 00:50:37.461992 | orchestrator | Monday 16 March 2026 00:48:46 +0000 (0:00:10.315) 0:02:32.971 **********
2026-03-16 00:50:37.461995 | orchestrator | ok: [testbed-manager]
2026-03-16 00:50:37.461999 | orchestrator |
2026-03-16 00:50:37.462003 | orchestrator | TASK [Create .kube directory] **************************************************
2026-03-16 00:50:37.462007 | orchestrator | Monday 16 March 2026 00:48:47 +0000 (0:00:00.697) 0:02:33.668 **********
2026-03-16 00:50:37.462010 | orchestrator | changed: [testbed-manager]
2026-03-16 00:50:37.462067 | orchestrator |
2026-03-16 00:50:37.462078 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-16 00:50:37.462084 | orchestrator | Monday 16 March 2026 00:48:47 +0000 (0:00:00.465) 0:02:34.134 **********
2026-03-16 00:50:37.462099 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-16 00:50:37.462107 | orchestrator |
2026-03-16 00:50:37.462114 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-16 00:50:37.462120 | orchestrator | Monday 16 March 2026 00:48:48 +0000 (0:00:00.528) 0:02:34.662 **********
2026-03-16 00:50:37.462126 | orchestrator | changed: [testbed-manager]
2026-03-16 00:50:37.462133 | orchestrator |
2026-03-16 00:50:37.462139 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-03-16 00:50:37.462146 | orchestrator | Monday 16 March 2026 00:48:48 +0000 (0:00:00.729) 0:02:35.392 **********
2026-03-16 00:50:37.462153 | orchestrator | changed: [testbed-manager]
2026-03-16 00:50:37.462160 | orchestrator |
2026-03-16 00:50:37.462167 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-03-16 00:50:37.462174 | orchestrator | Monday 16 March 2026 00:48:49 +0000 (0:00:00.523) 0:02:35.916 **********
2026-03-16 00:50:37.462181 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-16 00:50:37.462187 | orchestrator |
2026-03-16 00:50:37.462194 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-03-16 00:50:37.462201 | orchestrator | Monday 16 March 2026 00:48:50 +0000 (0:00:01.501) 0:02:37.417 **********
2026-03-16 00:50:37.462207 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-16 00:50:37.462214 | orchestrator |
2026-03-16 00:50:37.462221 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-03-16 00:50:37.462228 | orchestrator | Monday 16 March 2026 00:48:51 +0000 (0:00:00.902) 0:02:38.319 **********
2026-03-16 00:50:37.462234 | orchestrator | changed: [testbed-manager]
2026-03-16 00:50:37.462241 | orchestrator |
2026-03-16 00:50:37.462248 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-03-16 00:50:37.462254 | orchestrator | Monday 16 March 2026 00:48:52 +0000 (0:00:00.819) 0:02:39.139 **********
2026-03-16 00:50:37.462261 | orchestrator | changed: [testbed-manager]
2026-03-16 00:50:37.462268 | orchestrator |
2026-03-16 00:50:37.462273 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-03-16 00:50:37.462277 | orchestrator |
2026-03-16 00:50:37.462281 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-03-16 00:50:37.462285 | orchestrator | Monday 16 March 2026 00:48:53 +0000 (0:00:00.594) 0:02:39.733 **********
2026-03-16 00:50:37.462288 | orchestrator | ok: [testbed-manager]
2026-03-16 00:50:37.462292 | orchestrator |
2026-03-16 00:50:37.462296 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-03-16 00:50:37.462299 | orchestrator | Monday 16 March 2026 00:48:53 +0000 (0:00:00.147) 0:02:39.881 **********
2026-03-16 00:50:37.462303 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-03-16 00:50:37.462314 | orchestrator |
2026-03-16 00:50:37.462318 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-03-16 00:50:37.462322 | orchestrator | Monday 16 March 2026 00:48:53 +0000 (0:00:00.195) 0:02:40.076 **********
2026-03-16 00:50:37.462325 | orchestrator | ok: [testbed-manager]
2026-03-16 00:50:37.462329 | orchestrator |
2026-03-16 00:50:37.462333 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-03-16 00:50:37.462373 | orchestrator | Monday 16 March 2026 00:48:54 +0000 (0:00:01.052) 0:02:41.129 **********
2026-03-16 00:50:37.462378 | orchestrator | ok: [testbed-manager]
2026-03-16 00:50:37.462382 | orchestrator |
2026-03-16 00:50:37.462385 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-03-16 00:50:37.462389 | orchestrator | Monday 16 March 2026 00:48:56 +0000 (0:00:01.474) 0:02:42.603 **********
2026-03-16 00:50:37.462393 | orchestrator | changed: [testbed-manager]
2026-03-16 00:50:37.462396 | orchestrator |
2026-03-16 00:50:37.462400 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-03-16 00:50:37.462404 | orchestrator | Monday 16 March 2026 00:48:56 +0000 (0:00:00.737) 0:02:43.341 **********
2026-03-16 00:50:37.462408 | orchestrator | ok: [testbed-manager]
2026-03-16 00:50:37.462411 | orchestrator |
2026-03-16 00:50:37.462420 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-03-16 00:50:37.462424 | orchestrator | Monday 16 March 2026 00:48:57 +0000 (0:00:00.417) 0:02:43.758 **********
2026-03-16 00:50:37.462428 | orchestrator | changed: [testbed-manager]
2026-03-16 00:50:37.462431 | orchestrator |
2026-03-16 00:50:37.462435 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-03-16 00:50:37.462439 | orchestrator | Monday 16 March 2026 00:49:04 +0000 (0:00:06.976) 0:02:50.735 **********
2026-03-16 00:50:37.462443 | orchestrator | changed: [testbed-manager]
2026-03-16 00:50:37.462447 | orchestrator |
2026-03-16 00:50:37.462450 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-03-16 00:50:37.462454 | orchestrator | Monday 16 March 2026 00:49:18 +0000 (0:00:14.203) 0:03:04.939 **********
2026-03-16 00:50:37.462458 | orchestrator | ok: [testbed-manager]
2026-03-16 00:50:37.462461 | orchestrator |
2026-03-16 00:50:37.462465 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-03-16 00:50:37.462469 | orchestrator |
2026-03-16 00:50:37.462473 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-03-16 00:50:37.462477 | orchestrator | Monday 16 March 2026 00:49:19 +0000 (0:00:00.718) 0:03:05.657 **********
2026-03-16 00:50:37.462480 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:50:37.462484 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:50:37.462488 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:50:37.462491 | orchestrator |
2026-03-16 00:50:37.462495 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-03-16 00:50:37.462499 | orchestrator | Monday 16 March 2026 00:49:19 +0000 (0:00:00.294) 0:03:05.952 **********
2026-03-16 00:50:37.462503 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:50:37.462506 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:50:37.462510 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:50:37.462514 | orchestrator |
2026-03-16 00:50:37.462517 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-03-16 00:50:37.462521 | orchestrator | Monday 16 March 2026 00:49:19 +0000 (0:00:00.429) 0:03:06.382 **********
2026-03-16 00:50:37.462525 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-16 00:50:37.462529 | orchestrator |
2026-03-16 00:50:37.462536 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-03-16 00:50:37.462540 | orchestrator | Monday 16 March 2026 00:49:20 +0000 (0:00:00.688) 0:03:07.070 **********
2026-03-16 00:50:37.462543 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-16 00:50:37.462547 | orchestrator |
2026-03-16 00:50:37.462551 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-03-16 00:50:37.462558 | orchestrator | Monday 16 March 2026 00:49:21 +0000 (0:00:00.776) 0:03:07.847 **********
2026-03-16 00:50:37.462562 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-16 00:50:37.462566 | orchestrator |
2026-03-16 00:50:37.462570 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-03-16 00:50:37.462574 | orchestrator | Monday 16 March 2026 00:49:22 +0000 (0:00:00.800) 0:03:08.648 **********
2026-03-16 00:50:37.462578 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:50:37.462581 | orchestrator |
2026-03-16 00:50:37.462585 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-03-16 00:50:37.462589 | orchestrator | Monday 16 March 2026 00:49:22 +0000 (0:00:00.132) 0:03:08.780 **********
2026-03-16 00:50:37.462593 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-16 00:50:37.462596 | orchestrator |
2026-03-16 00:50:37.462600 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-03-16 00:50:37.462604 | orchestrator | Monday 16 March 2026 00:49:23 +0000 (0:00:00.975) 0:03:09.756 **********
2026-03-16 00:50:37.462608 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:50:37.462611 | orchestrator |
2026-03-16 00:50:37.462615 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-03-16 00:50:37.462619 | orchestrator | Monday 16 March 2026 00:49:23 +0000 (0:00:00.150) 0:03:09.907 **********
2026-03-16 00:50:37.462623 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:50:37.462626 | orchestrator |
2026-03-16 00:50:37.462630 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-03-16 00:50:37.462634 |
orchestrator | Monday 16 March 2026 00:49:23 +0000 (0:00:00.089) 0:03:09.996 ********** 2026-03-16 00:50:37.462637 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:50:37.462641 | orchestrator | 2026-03-16 00:50:37.462645 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-03-16 00:50:37.462649 | orchestrator | Monday 16 March 2026 00:49:23 +0000 (0:00:00.131) 0:03:10.128 ********** 2026-03-16 00:50:37.462652 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:50:37.462656 | orchestrator | 2026-03-16 00:50:37.462660 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-03-16 00:50:37.462664 | orchestrator | Monday 16 March 2026 00:49:23 +0000 (0:00:00.109) 0:03:10.238 ********** 2026-03-16 00:50:37.462667 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-16 00:50:37.462671 | orchestrator | 2026-03-16 00:50:37.462675 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-03-16 00:50:37.462679 | orchestrator | Monday 16 March 2026 00:49:28 +0000 (0:00:04.649) 0:03:14.887 ********** 2026-03-16 00:50:37.462682 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-03-16 00:50:37.462686 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
2026-03-16 00:50:37.462690 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-03-16 00:50:37.462694 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-03-16 00:50:37.462697 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-03-16 00:50:37.462701 | orchestrator | 2026-03-16 00:50:37.462705 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-03-16 00:50:37.462709 | orchestrator | Monday 16 March 2026 00:50:10 +0000 (0:00:41.794) 0:03:56.682 ********** 2026-03-16 00:50:37.462716 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-16 00:50:37.462720 | orchestrator | 2026-03-16 00:50:37 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:50:37.462724 | orchestrator | 2026-03-16 00:50:37 | INFO  | Task c96e2f8a-52ca-433e-ac09-879563aff87f is in state STARTED 2026-03-16 00:50:37.462728 | orchestrator | 2026-03-16 00:50:37 | INFO  | Task ba6c4e3d-5fad-473a-89b6-c43c88501959 is in state SUCCESS 2026-03-16 00:50:37.462735 | orchestrator | 2026-03-16 00:50:37 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:50:37.462739 | orchestrator | 2026-03-16 00:50:37 | INFO  | Task 286dcda1-9059-41c5-afad-f27ec04ba258 is in state STARTED 2026-03-16 00:50:37.462742 | orchestrator | 2026-03-16 00:50:37 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:50:37.462746 | orchestrator | 2026-03-16 00:50:37.462750 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-03-16 00:50:37.462754 | orchestrator | Monday 16 March 2026 00:50:11 +0000 (0:00:01.144) 0:03:57.826 ********** 2026-03-16 00:50:37.462757 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-16 00:50:37.462761 | orchestrator | 2026-03-16 00:50:37.462765 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] 
*********************************** 2026-03-16 00:50:37.462769 | orchestrator | Monday 16 March 2026 00:50:12 +0000 (0:00:01.304) 0:03:59.131 ********** 2026-03-16 00:50:37.462773 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-16 00:50:37.462776 | orchestrator | 2026-03-16 00:50:37.462780 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-03-16 00:50:37.462784 | orchestrator | Monday 16 March 2026 00:50:13 +0000 (0:00:00.996) 0:04:00.128 ********** 2026-03-16 00:50:37.462788 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:50:37.462792 | orchestrator | 2026-03-16 00:50:37.462795 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-03-16 00:50:37.462801 | orchestrator | Monday 16 March 2026 00:50:13 +0000 (0:00:00.126) 0:04:00.254 ********** 2026-03-16 00:50:37.462805 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-03-16 00:50:37.462809 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-03-16 00:50:37.462813 | orchestrator | 2026-03-16 00:50:37.462816 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-03-16 00:50:37.462820 | orchestrator | Monday 16 March 2026 00:50:15 +0000 (0:00:02.171) 0:04:02.426 ********** 2026-03-16 00:50:37.462824 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:50:37.462828 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:50:37.462831 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:50:37.462835 | orchestrator | 2026-03-16 00:50:37.462839 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-03-16 00:50:37.462843 | orchestrator | Monday 16 March 2026 00:50:16 +0000 (0:00:00.425) 0:04:02.852 ********** 2026-03-16 00:50:37.462847 | orchestrator | ok: [testbed-node-0] 
2026-03-16 00:50:37.462851 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:50:37.462854 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:50:37.462858 | orchestrator | 2026-03-16 00:50:37.462862 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-03-16 00:50:37.462866 | orchestrator | 2026-03-16 00:50:37.462869 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-03-16 00:50:37.462873 | orchestrator | Monday 16 March 2026 00:50:17 +0000 (0:00:01.216) 0:04:04.069 ********** 2026-03-16 00:50:37.462877 | orchestrator | ok: [testbed-manager] 2026-03-16 00:50:37.462881 | orchestrator | 2026-03-16 00:50:37.462884 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2026-03-16 00:50:37.462888 | orchestrator | Monday 16 March 2026 00:50:17 +0000 (0:00:00.173) 0:04:04.242 ********** 2026-03-16 00:50:37.462892 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-03-16 00:50:37.462896 | orchestrator | 2026-03-16 00:50:37.462900 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-03-16 00:50:37.462903 | orchestrator | Monday 16 March 2026 00:50:18 +0000 (0:00:00.285) 0:04:04.528 ********** 2026-03-16 00:50:37.462907 | orchestrator | changed: [testbed-manager] 2026-03-16 00:50:37.462911 | orchestrator | 2026-03-16 00:50:37.462915 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-03-16 00:50:37.462918 | orchestrator | 2026-03-16 00:50:37.462922 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-03-16 00:50:37.462929 | orchestrator | Monday 16 March 2026 00:50:24 +0000 (0:00:06.709) 0:04:11.238 ********** 2026-03-16 00:50:37.462932 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:50:37.462936 | orchestrator | ok: 
[testbed-node-4] 2026-03-16 00:50:37.462940 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:50:37.462944 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:50:37.462947 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:50:37.462951 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:50:37.462955 | orchestrator | 2026-03-16 00:50:37.462959 | orchestrator | TASK [Manage labels] *********************************************************** 2026-03-16 00:50:37.462962 | orchestrator | Monday 16 March 2026 00:50:25 +0000 (0:00:00.711) 0:04:11.949 ********** 2026-03-16 00:50:37.462966 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-16 00:50:37.462970 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-16 00:50:37.462974 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-16 00:50:37.462978 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-16 00:50:37.462981 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-16 00:50:37.462988 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-16 00:50:37.462992 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-16 00:50:37.462996 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-16 00:50:37.462999 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-16 00:50:37.463003 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-16 00:50:37.463007 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-16 00:50:37.463010 | orchestrator | ok: 
[testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-16 00:50:37.463014 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-16 00:50:37.463018 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-16 00:50:37.463022 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-16 00:50:37.463025 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-16 00:50:37.463029 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-16 00:50:37.463033 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-16 00:50:37.463037 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-16 00:50:37.463040 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-16 00:50:37.463044 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-16 00:50:37.463050 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-16 00:50:37.463054 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-16 00:50:37.463061 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-16 00:50:37.463068 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-16 00:50:37.463074 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-16 00:50:37.463081 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-16 00:50:37.463087 | orchestrator | ok: [testbed-node-1 -> 
localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-16 00:50:37.463099 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-16 00:50:37.463106 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-16 00:50:37.463113 | orchestrator | 2026-03-16 00:50:37.463120 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-03-16 00:50:37.463126 | orchestrator | Monday 16 March 2026 00:50:35 +0000 (0:00:10.245) 0:04:22.195 ********** 2026-03-16 00:50:37.463133 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:50:37.463137 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:50:37.463141 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:50:37.463145 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:50:37.463149 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:50:37.463152 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:50:37.463156 | orchestrator | 2026-03-16 00:50:37.463160 | orchestrator | TASK [Manage taints] *********************************************************** 2026-03-16 00:50:37.463164 | orchestrator | Monday 16 March 2026 00:50:36 +0000 (0:00:00.745) 0:04:22.941 ********** 2026-03-16 00:50:37.463167 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:50:37.463171 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:50:37.463175 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:50:37.463178 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:50:37.463182 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:50:37.463186 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:50:37.463189 | orchestrator | 2026-03-16 00:50:37.463193 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-16 00:50:37.463197 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 
skipped=0 rescued=0 ignored=0 2026-03-16 00:50:37.463201 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-16 00:50:37.463205 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-16 00:50:37.463209 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-16 00:50:37.463213 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-16 00:50:37.463217 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-16 00:50:37.463223 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-16 00:50:37.463227 | orchestrator | 2026-03-16 00:50:37.463231 | orchestrator | 2026-03-16 00:50:37.463235 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-16 00:50:37.463239 | orchestrator | Monday 16 March 2026 00:50:36 +0000 (0:00:00.484) 0:04:23.427 ********** 2026-03-16 00:50:37.463242 | orchestrator | =============================================================================== 2026-03-16 00:50:37.463246 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.55s 2026-03-16 00:50:37.463250 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 41.79s 2026-03-16 00:50:37.463254 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 24.15s 2026-03-16 00:50:37.463257 | orchestrator | kubectl : Install required packages ------------------------------------ 14.20s 2026-03-16 00:50:37.463261 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.32s 2026-03-16 00:50:37.463265 | orchestrator | Manage labels 
---------------------------------------------------------- 10.25s 2026-03-16 00:50:37.463271 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 7.29s 2026-03-16 00:50:37.463275 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.98s 2026-03-16 00:50:37.463279 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.71s 2026-03-16 00:50:37.463283 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.65s 2026-03-16 00:50:37.463286 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.51s 2026-03-16 00:50:37.463290 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.48s 2026-03-16 00:50:37.463294 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.41s 2026-03-16 00:50:37.463300 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 2.36s 2026-03-16 00:50:37.463304 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.23s 2026-03-16 00:50:37.463308 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 2.22s 2026-03-16 00:50:37.463311 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.17s 2026-03-16 00:50:37.463315 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.12s 2026-03-16 00:50:37.463319 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 2.07s 2026-03-16 00:50:37.463322 | orchestrator | k3s_download : Download k3s binary arm64 -------------------------------- 1.99s 2026-03-16 00:50:40.459073 | orchestrator | 2026-03-16 00:50:40 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 
2026-03-16 00:50:40.459158 | orchestrator | 2026-03-16 00:50:40 | INFO  | Task f5a946c9-b4f6-4f23-9c52-055c88ae2bf5 is in state STARTED 2026-03-16 00:50:40.459167 | orchestrator | 2026-03-16 00:50:40 | INFO  | Task c96e2f8a-52ca-433e-ac09-879563aff87f is in state STARTED 2026-03-16 00:50:40.460906 | orchestrator | 2026-03-16 00:50:40 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:50:40.461623 | orchestrator | 2026-03-16 00:50:40 | INFO  | Task 8d36c537-425f-4831-b37b-e20e46a10fc2 is in state STARTED 2026-03-16 00:50:40.463625 | orchestrator | 2026-03-16 00:50:40 | INFO  | Task 286dcda1-9059-41c5-afad-f27ec04ba258 is in state STARTED 2026-03-16 00:50:40.463667 | orchestrator | 2026-03-16 00:50:40 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:50:43.514913 | orchestrator | 2026-03-16 00:50:43 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:50:43.515274 | orchestrator | 2026-03-16 00:50:43 | INFO  | Task f5a946c9-b4f6-4f23-9c52-055c88ae2bf5 is in state STARTED 2026-03-16 00:50:43.515688 | orchestrator | 2026-03-16 00:50:43 | INFO  | Task c96e2f8a-52ca-433e-ac09-879563aff87f is in state STARTED 2026-03-16 00:50:43.518545 | orchestrator | 2026-03-16 00:50:43 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:50:43.519183 | orchestrator | 2026-03-16 00:50:43 | INFO  | Task 8d36c537-425f-4831-b37b-e20e46a10fc2 is in state STARTED 2026-03-16 00:50:43.520071 | orchestrator | 2026-03-16 00:50:43 | INFO  | Task 286dcda1-9059-41c5-afad-f27ec04ba258 is in state STARTED 2026-03-16 00:50:43.520113 | orchestrator | 2026-03-16 00:50:43 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:50:46.562846 | orchestrator | 2026-03-16 00:50:46 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:50:46.563102 | orchestrator | 2026-03-16 00:50:46 | INFO  | Task f5a946c9-b4f6-4f23-9c52-055c88ae2bf5 is in state STARTED 
2026-03-16 00:50:46.567665 | orchestrator | 2026-03-16 00:50:46 | INFO  | Task c96e2f8a-52ca-433e-ac09-879563aff87f is in state STARTED 2026-03-16 00:50:46.568407 | orchestrator | 2026-03-16 00:50:46 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:50:46.570972 | orchestrator | 2026-03-16 00:50:46 | INFO  | Task 8d36c537-425f-4831-b37b-e20e46a10fc2 is in state SUCCESS 2026-03-16 00:50:46.571954 | orchestrator | 2026-03-16 00:50:46 | INFO  | Task 286dcda1-9059-41c5-afad-f27ec04ba258 is in state STARTED 2026-03-16 00:50:46.572013 | orchestrator | 2026-03-16 00:50:46 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:50:49.611441 | orchestrator | 2026-03-16 00:50:49 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:50:49.612125 | orchestrator | 2026-03-16 00:50:49 | INFO  | Task f5a946c9-b4f6-4f23-9c52-055c88ae2bf5 is in state SUCCESS 2026-03-16 00:50:49.612676 | orchestrator | 2026-03-16 00:50:49 | INFO  | Task c96e2f8a-52ca-433e-ac09-879563aff87f is in state STARTED 2026-03-16 00:50:49.614785 | orchestrator | 2026-03-16 00:50:49 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:50:49.615656 | orchestrator | 2026-03-16 00:50:49 | INFO  | Task 286dcda1-9059-41c5-afad-f27ec04ba258 is in state STARTED 2026-03-16 00:50:49.615688 | orchestrator | 2026-03-16 00:50:49 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:50:52.656099 | orchestrator | 2026-03-16 00:50:52 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:50:52.656391 | orchestrator | 2026-03-16 00:50:52 | INFO  | Task c96e2f8a-52ca-433e-ac09-879563aff87f is in state STARTED 2026-03-16 00:50:52.657949 | orchestrator | 2026-03-16 00:50:52 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:50:52.659552 | orchestrator | 2026-03-16 00:50:52 | INFO  | Task 286dcda1-9059-41c5-afad-f27ec04ba258 is in state STARTED 
2026-03-16 00:50:52.659585 | orchestrator | 2026-03-16 00:50:52 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:50:55.693766 | orchestrator | 2026-03-16 00:50:55 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:50:55.694207 | orchestrator | 2026-03-16 00:50:55 | INFO  | Task c96e2f8a-52ca-433e-ac09-879563aff87f is in state STARTED 2026-03-16 00:50:55.695384 | orchestrator | 2026-03-16 00:50:55 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:50:55.697900 | orchestrator | 2026-03-16 00:50:55 | INFO  | Task 286dcda1-9059-41c5-afad-f27ec04ba258 is in state STARTED 2026-03-16 00:50:55.697959 | orchestrator | 2026-03-16 00:50:55 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:50:58.726416 | orchestrator | 2026-03-16 00:50:58 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:50:58.726923 | orchestrator | 2026-03-16 00:50:58 | INFO  | Task c96e2f8a-52ca-433e-ac09-879563aff87f is in state STARTED 2026-03-16 00:50:58.727479 | orchestrator | 2026-03-16 00:50:58 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:50:58.728465 | orchestrator | 2026-03-16 00:50:58 | INFO  | Task 286dcda1-9059-41c5-afad-f27ec04ba258 is in state STARTED 2026-03-16 00:50:58.728499 | orchestrator | 2026-03-16 00:50:58 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:51:01.760268 | orchestrator | 2026-03-16 00:51:01 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:51:01.760460 | orchestrator | 2026-03-16 00:51:01 | INFO  | Task c96e2f8a-52ca-433e-ac09-879563aff87f is in state STARTED 2026-03-16 00:51:01.761193 | orchestrator | 2026-03-16 00:51:01 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:51:01.761830 | orchestrator | 2026-03-16 00:51:01 | INFO  | Task 286dcda1-9059-41c5-afad-f27ec04ba258 is in state STARTED 2026-03-16 00:51:01.761862 | 
orchestrator | 2026-03-16 00:51:01 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:51:04.795802 | orchestrator | 2026-03-16 00:51:04 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:51:04.796224 | orchestrator | 2026-03-16 00:51:04 | INFO  | Task c96e2f8a-52ca-433e-ac09-879563aff87f is in state STARTED 2026-03-16 00:51:04.797071 | orchestrator | 2026-03-16 00:51:04 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:51:04.797802 | orchestrator | 2026-03-16 00:51:04 | INFO  | Task 286dcda1-9059-41c5-afad-f27ec04ba258 is in state STARTED 2026-03-16 00:51:04.797859 | orchestrator | 2026-03-16 00:51:04 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:51:07.845557 | orchestrator | 2026-03-16 00:51:07 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:51:07.845818 | orchestrator | 2026-03-16 00:51:07 | INFO  | Task c96e2f8a-52ca-433e-ac09-879563aff87f is in state STARTED 2026-03-16 00:51:07.848412 | orchestrator | 2026-03-16 00:51:07 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:51:07.850242 | orchestrator | 2026-03-16 00:51:07 | INFO  | Task 286dcda1-9059-41c5-afad-f27ec04ba258 is in state STARTED 2026-03-16 00:51:07.850454 | orchestrator | 2026-03-16 00:51:07 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:51:10.894989 | orchestrator | 2026-03-16 00:51:10.895055 | orchestrator | 2026-03-16 00:51:10.895060 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-03-16 00:51:10.895065 | orchestrator | 2026-03-16 00:51:10.895069 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-16 00:51:10.895074 | orchestrator | Monday 16 March 2026 00:50:41 +0000 (0:00:00.191) 0:00:00.191 ********** 2026-03-16 00:51:10.895079 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 
2026-03-16 00:51:10.895083 | orchestrator | 2026-03-16 00:51:10.895087 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-16 00:51:10.895091 | orchestrator | Monday 16 March 2026 00:50:42 +0000 (0:00:00.757) 0:00:00.949 ********** 2026-03-16 00:51:10.895095 | orchestrator | changed: [testbed-manager] 2026-03-16 00:51:10.895099 | orchestrator | 2026-03-16 00:51:10.895103 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-03-16 00:51:10.895107 | orchestrator | Monday 16 March 2026 00:50:43 +0000 (0:00:01.124) 0:00:02.073 ********** 2026-03-16 00:51:10.895110 | orchestrator | changed: [testbed-manager] 2026-03-16 00:51:10.895114 | orchestrator | 2026-03-16 00:51:10.895118 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-16 00:51:10.895122 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-16 00:51:10.895127 | orchestrator | 2026-03-16 00:51:10.895131 | orchestrator | 2026-03-16 00:51:10.895134 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-16 00:51:10.895143 | orchestrator | Monday 16 March 2026 00:50:43 +0000 (0:00:00.436) 0:00:02.509 ********** 2026-03-16 00:51:10.895147 | orchestrator | =============================================================================== 2026-03-16 00:51:10.895151 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.12s 2026-03-16 00:51:10.895155 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.76s 2026-03-16 00:51:10.895158 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.44s 2026-03-16 00:51:10.895162 | orchestrator | 2026-03-16 00:51:10.895166 | orchestrator | 2026-03-16 00:51:10.895170 | orchestrator | PLAY [Prepare kubeconfig 
file] ************************************************* 2026-03-16 00:51:10.895174 | orchestrator | 2026-03-16 00:51:10.895191 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-03-16 00:51:10.895195 | orchestrator | Monday 16 March 2026 00:50:41 +0000 (0:00:00.163) 0:00:00.163 ********** 2026-03-16 00:51:10.895199 | orchestrator | ok: [testbed-manager] 2026-03-16 00:51:10.895203 | orchestrator | 2026-03-16 00:51:10.895207 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-03-16 00:51:10.895211 | orchestrator | Monday 16 March 2026 00:50:42 +0000 (0:00:00.730) 0:00:00.893 ********** 2026-03-16 00:51:10.895214 | orchestrator | ok: [testbed-manager] 2026-03-16 00:51:10.895218 | orchestrator | 2026-03-16 00:51:10.895222 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-16 00:51:10.895226 | orchestrator | Monday 16 March 2026 00:50:42 +0000 (0:00:00.600) 0:00:01.494 ********** 2026-03-16 00:51:10.895229 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-16 00:51:10.895233 | orchestrator | 2026-03-16 00:51:10.895237 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-16 00:51:10.895240 | orchestrator | Monday 16 March 2026 00:50:43 +0000 (0:00:00.716) 0:00:02.210 ********** 2026-03-16 00:51:10.895244 | orchestrator | changed: [testbed-manager] 2026-03-16 00:51:10.895248 | orchestrator | 2026-03-16 00:51:10.895251 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-03-16 00:51:10.895255 | orchestrator | Monday 16 March 2026 00:50:44 +0000 (0:00:01.374) 0:00:03.585 ********** 2026-03-16 00:51:10.895259 | orchestrator | changed: [testbed-manager] 2026-03-16 00:51:10.895263 | orchestrator | 2026-03-16 00:51:10.895266 | orchestrator | TASK [Make kubeconfig available for use inside the 
manager service] ************ 2026-03-16 00:51:10.895295 | orchestrator | Monday 16 March 2026 00:50:45 +0000 (0:00:00.599) 0:00:04.184 ********** 2026-03-16 00:51:10.895300 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-16 00:51:10.895303 | orchestrator | 2026-03-16 00:51:10.895307 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-03-16 00:51:10.895311 | orchestrator | Monday 16 March 2026 00:50:47 +0000 (0:00:01.615) 0:00:05.799 ********** 2026-03-16 00:51:10.895314 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-16 00:51:10.895318 | orchestrator | 2026-03-16 00:51:10.895322 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-03-16 00:51:10.895325 | orchestrator | Monday 16 March 2026 00:50:47 +0000 (0:00:00.909) 0:00:06.709 ********** 2026-03-16 00:51:10.895329 | orchestrator | ok: [testbed-manager] 2026-03-16 00:51:10.895333 | orchestrator | 2026-03-16 00:51:10.895337 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-03-16 00:51:10.895341 | orchestrator | Monday 16 March 2026 00:50:48 +0000 (0:00:00.525) 0:00:07.235 ********** 2026-03-16 00:51:10.895344 | orchestrator | ok: [testbed-manager] 2026-03-16 00:51:10.895348 | orchestrator | 2026-03-16 00:51:10.895352 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-16 00:51:10.895355 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-16 00:51:10.895359 | orchestrator | 2026-03-16 00:51:10.895363 | orchestrator | 2026-03-16 00:51:10.895367 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-16 00:51:10.895370 | orchestrator | Monday 16 March 2026 00:50:48 +0000 (0:00:00.349) 0:00:07.584 ********** 2026-03-16 00:51:10.895374 | orchestrator | 
=============================================================================== 2026-03-16 00:51:10.895378 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.62s 2026-03-16 00:51:10.895381 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.37s 2026-03-16 00:51:10.895385 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.91s 2026-03-16 00:51:10.895397 | orchestrator | Get home directory of operator user ------------------------------------- 0.73s 2026-03-16 00:51:10.895402 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.72s 2026-03-16 00:51:10.895411 | orchestrator | Create .kube directory -------------------------------------------------- 0.60s 2026-03-16 00:51:10.895414 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.60s 2026-03-16 00:51:10.895418 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.53s 2026-03-16 00:51:10.895422 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.35s 2026-03-16 00:51:10.895426 | orchestrator | 2026-03-16 00:51:10.895429 | orchestrator | 2026-03-16 00:51:10.895433 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-03-16 00:51:10.895437 | orchestrator | 2026-03-16 00:51:10.895440 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-03-16 00:51:10.895444 | orchestrator | Monday 16 March 2026 00:48:46 +0000 (0:00:00.177) 0:00:00.177 ********** 2026-03-16 00:51:10.895448 | orchestrator | ok: [localhost] => { 2026-03-16 00:51:10.895453 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 
2026-03-16 00:51:10.895457 | orchestrator | } 2026-03-16 00:51:10.895461 | orchestrator | 2026-03-16 00:51:10.895464 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-03-16 00:51:10.895470 | orchestrator | Monday 16 March 2026 00:48:47 +0000 (0:00:00.079) 0:00:00.257 ********** 2026-03-16 00:51:10.895475 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-03-16 00:51:10.895480 | orchestrator | ...ignoring 2026-03-16 00:51:10.895484 | orchestrator | 2026-03-16 00:51:10.895488 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-03-16 00:51:10.895492 | orchestrator | Monday 16 March 2026 00:48:50 +0000 (0:00:03.459) 0:00:03.716 ********** 2026-03-16 00:51:10.895495 | orchestrator | skipping: [localhost] 2026-03-16 00:51:10.895499 | orchestrator | 2026-03-16 00:51:10.895503 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-03-16 00:51:10.895507 | orchestrator | Monday 16 March 2026 00:48:50 +0000 (0:00:00.074) 0:00:03.791 ********** 2026-03-16 00:51:10.895510 | orchestrator | ok: [localhost] 2026-03-16 00:51:10.895514 | orchestrator | 2026-03-16 00:51:10.895518 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-16 00:51:10.895522 | orchestrator | 2026-03-16 00:51:10.895525 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-16 00:51:10.895529 | orchestrator | Monday 16 March 2026 00:48:51 +0000 (0:00:00.449) 0:00:04.241 ********** 2026-03-16 00:51:10.895533 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:51:10.895536 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:51:10.895540 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:51:10.895544 | orchestrator | 2026-03-16 
00:51:10.895549 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-16 00:51:10.895553 | orchestrator | Monday 16 March 2026 00:48:52 +0000 (0:00:01.235) 0:00:05.476 ********** 2026-03-16 00:51:10.895557 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-03-16 00:51:10.895562 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-03-16 00:51:10.895566 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-03-16 00:51:10.895571 | orchestrator | 2026-03-16 00:51:10.895575 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-03-16 00:51:10.895579 | orchestrator | 2026-03-16 00:51:10.895583 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-16 00:51:10.895587 | orchestrator | Monday 16 March 2026 00:48:54 +0000 (0:00:01.847) 0:00:07.324 ********** 2026-03-16 00:51:10.895593 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:51:10.895597 | orchestrator | 2026-03-16 00:51:10.895601 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-16 00:51:10.895605 | orchestrator | Monday 16 March 2026 00:48:54 +0000 (0:00:00.750) 0:00:08.074 ********** 2026-03-16 00:51:10.895612 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:51:10.895617 | orchestrator | 2026-03-16 00:51:10.895621 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-03-16 00:51:10.895625 | orchestrator | Monday 16 March 2026 00:48:55 +0000 (0:00:01.018) 0:00:09.093 ********** 2026-03-16 00:51:10.895629 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:51:10.895633 | orchestrator | 2026-03-16 00:51:10.895637 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 
2026-03-16 00:51:10.895642 | orchestrator | Monday 16 March 2026 00:48:56 +0000 (0:00:00.324) 0:00:09.417 ********** 2026-03-16 00:51:10.895646 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:51:10.895650 | orchestrator | 2026-03-16 00:51:10.895654 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-03-16 00:51:10.895658 | orchestrator | Monday 16 March 2026 00:48:56 +0000 (0:00:00.305) 0:00:09.723 ********** 2026-03-16 00:51:10.895662 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:51:10.895667 | orchestrator | 2026-03-16 00:51:10.895671 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-03-16 00:51:10.895675 | orchestrator | Monday 16 March 2026 00:48:56 +0000 (0:00:00.414) 0:00:10.137 ********** 2026-03-16 00:51:10.895679 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:51:10.895684 | orchestrator | 2026-03-16 00:51:10.895688 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-16 00:51:10.895692 | orchestrator | Monday 16 March 2026 00:48:57 +0000 (0:00:00.713) 0:00:10.851 ********** 2026-03-16 00:51:10.895696 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:51:10.895701 | orchestrator | 2026-03-16 00:51:10.895705 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-16 00:51:10.895711 | orchestrator | Monday 16 March 2026 00:48:58 +0000 (0:00:00.632) 0:00:11.484 ********** 2026-03-16 00:51:10.895716 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:51:10.895720 | orchestrator | 2026-03-16 00:51:10.895724 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-03-16 00:51:10.895728 | orchestrator | Monday 16 March 2026 00:48:59 +0000 (0:00:00.839) 0:00:12.323 ********** 2026-03-16 
00:51:10.895732 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:51:10.895737 | orchestrator | 2026-03-16 00:51:10.895741 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-03-16 00:51:10.895745 | orchestrator | Monday 16 March 2026 00:48:59 +0000 (0:00:00.339) 0:00:12.663 ********** 2026-03-16 00:51:10.895749 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:51:10.895753 | orchestrator | 2026-03-16 00:51:10.895758 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-03-16 00:51:10.895762 | orchestrator | Monday 16 March 2026 00:48:59 +0000 (0:00:00.433) 0:00:13.097 ********** 2026-03-16 00:51:10.895772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-16 00:51:10.895778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-16 00:51:10.895787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 
'host_group': 'rabbitmq'}}}}) 2026-03-16 00:51:10.895792 | orchestrator | 2026-03-16 00:51:10.895796 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-03-16 00:51:10.895801 | orchestrator | Monday 16 March 2026 00:49:00 +0000 (0:00:01.020) 0:00:14.117 ********** 2026-03-16 00:51:10.895808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-16 00:51:10.895842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-16 00:51:10.895853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-16 00:51:10.895858 | orchestrator | 2026-03-16 00:51:10.895862 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-03-16 00:51:10.895867 | orchestrator | Monday 16 March 2026 00:49:04 +0000 (0:00:03.142) 0:00:17.260 ********** 2026-03-16 00:51:10.895871 | orchestrator | 
changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-16 00:51:10.895876 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-16 00:51:10.895880 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-16 00:51:10.895884 | orchestrator | 2026-03-16 00:51:10.895889 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-03-16 00:51:10.895893 | orchestrator | Monday 16 March 2026 00:49:06 +0000 (0:00:02.123) 0:00:19.384 ********** 2026-03-16 00:51:10.895897 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-16 00:51:10.895901 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-16 00:51:10.895906 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-16 00:51:10.895910 | orchestrator | 2026-03-16 00:51:10.895917 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-03-16 00:51:10.895920 | orchestrator | Monday 16 March 2026 00:49:09 +0000 (0:00:03.240) 0:00:22.624 ********** 2026-03-16 00:51:10.895924 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-16 00:51:10.895928 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-16 00:51:10.895932 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-16 00:51:10.895935 | orchestrator | 2026-03-16 00:51:10.895939 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-03-16 00:51:10.895943 | orchestrator | Monday 16 March 2026 00:49:10 +0000 (0:00:01.410) 0:00:24.034 ********** 
2026-03-16 00:51:10.895947 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-16 00:51:10.895950 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-16 00:51:10.895954 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-16 00:51:10.895961 | orchestrator | 2026-03-16 00:51:10.895964 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-03-16 00:51:10.895968 | orchestrator | Monday 16 March 2026 00:49:13 +0000 (0:00:02.469) 0:00:26.504 ********** 2026-03-16 00:51:10.895974 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-16 00:51:10.895978 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-16 00:51:10.895982 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-16 00:51:10.895986 | orchestrator | 2026-03-16 00:51:10.895989 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-03-16 00:51:10.895993 | orchestrator | Monday 16 March 2026 00:49:14 +0000 (0:00:01.398) 0:00:27.903 ********** 2026-03-16 00:51:10.895997 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-16 00:51:10.896001 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-16 00:51:10.896004 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-16 00:51:10.896008 | orchestrator | 2026-03-16 00:51:10.896012 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-16 00:51:10.896016 | orchestrator | Monday 16 
March 2026 00:49:16 +0000 (0:00:01.719) 0:00:29.622 ********** 2026-03-16 00:51:10.896019 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:51:10.896025 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:51:10.896031 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:51:10.896037 | orchestrator | 2026-03-16 00:51:10.896043 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-03-16 00:51:10.896048 | orchestrator | Monday 16 March 2026 00:49:16 +0000 (0:00:00.423) 0:00:30.046 ********** 2026-03-16 00:51:10.896054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-16 00:51:10.896065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-16 00:51:10.896077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-16 00:51:10.896084 | orchestrator | 2026-03-16 00:51:10.896089 | orchestrator | TASK [rabbitmq : Creating 
rabbitmq volume] ************************************* 2026-03-16 00:51:10.896095 | orchestrator | Monday 16 March 2026 00:49:18 +0000 (0:00:01.703) 0:00:31.749 ********** 2026-03-16 00:51:10.896100 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:51:10.896106 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:51:10.896113 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:51:10.896119 | orchestrator | 2026-03-16 00:51:10.896125 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-03-16 00:51:10.896131 | orchestrator | Monday 16 March 2026 00:49:19 +0000 (0:00:00.934) 0:00:32.684 ********** 2026-03-16 00:51:10.896137 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:51:10.896143 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:51:10.896150 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:51:10.896156 | orchestrator | 2026-03-16 00:51:10.896163 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-03-16 00:51:10.896169 | orchestrator | Monday 16 March 2026 00:49:26 +0000 (0:00:07.231) 0:00:39.915 ********** 2026-03-16 00:51:10.896173 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:51:10.896176 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:51:10.896180 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:51:10.896184 | orchestrator | 2026-03-16 00:51:10.896187 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-16 00:51:10.896191 | orchestrator | 2026-03-16 00:51:10.896195 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-16 00:51:10.896198 | orchestrator | Monday 16 March 2026 00:49:27 +0000 (0:00:00.443) 0:00:40.358 ********** 2026-03-16 00:51:10.896202 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:51:10.896206 | orchestrator | 2026-03-16 00:51:10.896209 | orchestrator | TASK [rabbitmq : 
Put RabbitMQ node into maintenance mode] ********************** 2026-03-16 00:51:10.896213 | orchestrator | Monday 16 March 2026 00:49:27 +0000 (0:00:00.613) 0:00:40.971 ********** 2026-03-16 00:51:10.896217 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:51:10.896220 | orchestrator | 2026-03-16 00:51:10.896224 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-16 00:51:10.896228 | orchestrator | Monday 16 March 2026 00:49:28 +0000 (0:00:00.392) 0:00:41.364 ********** 2026-03-16 00:51:10.896232 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:51:10.896235 | orchestrator | 2026-03-16 00:51:10.896239 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-16 00:51:10.896243 | orchestrator | Monday 16 March 2026 00:49:29 +0000 (0:00:01.777) 0:00:43.145 ********** 2026-03-16 00:51:10.896246 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:51:10.896250 | orchestrator | 2026-03-16 00:51:10.896254 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-16 00:51:10.896261 | orchestrator | 2026-03-16 00:51:10.896265 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-16 00:51:10.896268 | orchestrator | Monday 16 March 2026 00:50:27 +0000 (0:00:57.667) 0:01:40.812 ********** 2026-03-16 00:51:10.896297 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:51:10.896301 | orchestrator | 2026-03-16 00:51:10.896304 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-16 00:51:10.896308 | orchestrator | Monday 16 March 2026 00:50:28 +0000 (0:00:00.618) 0:01:41.430 ********** 2026-03-16 00:51:10.896312 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:51:10.896315 | orchestrator | 2026-03-16 00:51:10.896319 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] 
*********************************** 2026-03-16 00:51:10.896323 | orchestrator | Monday 16 March 2026 00:50:28 +0000 (0:00:00.239) 0:01:41.670 ********** 2026-03-16 00:51:10.896327 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:51:10.896330 | orchestrator | 2026-03-16 00:51:10.896334 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-16 00:51:10.896338 | orchestrator | Monday 16 March 2026 00:50:31 +0000 (0:00:02.600) 0:01:44.270 ********** 2026-03-16 00:51:10.896341 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:51:10.896345 | orchestrator | 2026-03-16 00:51:10.896349 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-16 00:51:10.896352 | orchestrator | 2026-03-16 00:51:10.896356 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-16 00:51:10.896362 | orchestrator | Monday 16 March 2026 00:50:46 +0000 (0:00:15.690) 0:01:59.961 ********** 2026-03-16 00:51:10.896366 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:51:10.896370 | orchestrator | 2026-03-16 00:51:10.896374 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-16 00:51:10.896378 | orchestrator | Monday 16 March 2026 00:50:47 +0000 (0:00:00.667) 0:02:00.629 ********** 2026-03-16 00:51:10.896381 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:51:10.896385 | orchestrator | 2026-03-16 00:51:10.896389 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-16 00:51:10.896393 | orchestrator | Monday 16 March 2026 00:50:47 +0000 (0:00:00.329) 0:02:00.959 ********** 2026-03-16 00:51:10.896396 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:51:10.896400 | orchestrator | 2026-03-16 00:51:10.896404 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-16 
00:51:10.896408 | orchestrator | Monday 16 March 2026 00:50:49 +0000 (0:00:01.902) 0:02:02.861 ********** 2026-03-16 00:51:10.896411 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:51:10.896415 | orchestrator | 2026-03-16 00:51:10.896419 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-03-16 00:51:10.896423 | orchestrator | 2026-03-16 00:51:10.896426 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-03-16 00:51:10.896430 | orchestrator | Monday 16 March 2026 00:51:05 +0000 (0:00:16.059) 0:02:18.921 ********** 2026-03-16 00:51:10.896434 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:51:10.896437 | orchestrator | 2026-03-16 00:51:10.896444 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-03-16 00:51:10.896448 | orchestrator | Monday 16 March 2026 00:51:06 +0000 (0:00:00.481) 0:02:19.402 ********** 2026-03-16 00:51:10.896452 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-16 00:51:10.896455 | orchestrator | enable_outward_rabbitmq_True 2026-03-16 00:51:10.896459 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-16 00:51:10.896463 | orchestrator | outward_rabbitmq_restart 2026-03-16 00:51:10.896467 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:51:10.896470 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:51:10.896474 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:51:10.896478 | orchestrator | 2026-03-16 00:51:10.896482 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-03-16 00:51:10.896489 | orchestrator | skipping: no hosts matched 2026-03-16 00:51:10.896493 | orchestrator | 2026-03-16 00:51:10.896496 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-03-16 
00:51:10.896500 | orchestrator | skipping: no hosts matched 2026-03-16 00:51:10.896504 | orchestrator | 2026-03-16 00:51:10.896508 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-03-16 00:51:10.896511 | orchestrator | skipping: no hosts matched 2026-03-16 00:51:10.896515 | orchestrator | 2026-03-16 00:51:10.896519 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-16 00:51:10.896523 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-16 00:51:10.896527 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-16 00:51:10.896531 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-16 00:51:10.896535 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-16 00:51:10.896539 | orchestrator | 2026-03-16 00:51:10.896542 | orchestrator | 2026-03-16 00:51:10.896546 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-16 00:51:10.896550 | orchestrator | Monday 16 March 2026 00:51:08 +0000 (0:00:02.300) 0:02:21.703 ********** 2026-03-16 00:51:10.896554 | orchestrator | =============================================================================== 2026-03-16 00:51:10.896557 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 89.42s 2026-03-16 00:51:10.896561 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.23s 2026-03-16 00:51:10.896565 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 6.28s 2026-03-16 00:51:10.896569 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.46s 2026-03-16 00:51:10.896572 | orchestrator | 
rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.24s 2026-03-16 00:51:10.896576 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 3.14s 2026-03-16 00:51:10.896580 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.47s 2026-03-16 00:51:10.896583 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.30s 2026-03-16 00:51:10.896587 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.12s 2026-03-16 00:51:10.896591 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.90s 2026-03-16 00:51:10.896594 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.85s 2026-03-16 00:51:10.896598 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.72s 2026-03-16 00:51:10.896602 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.70s 2026-03-16 00:51:10.896606 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.41s 2026-03-16 00:51:10.896609 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.40s 2026-03-16 00:51:10.896615 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.24s 2026-03-16 00:51:10.896619 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.02s 2026-03-16 00:51:10.896623 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.02s 2026-03-16 00:51:10.896627 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.96s 2026-03-16 00:51:10.896630 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.93s 2026-03-16 00:51:10.896634 | orchestrator | 2026-03-16 
00:51:10 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:51:10.896641 | orchestrator | 2026-03-16 00:51:10 | INFO  | Task c96e2f8a-52ca-433e-ac09-879563aff87f is in state STARTED 2026-03-16 00:51:10.896645 | orchestrator | 2026-03-16 00:51:10 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:51:10.896649 | orchestrator | 2026-03-16 00:51:10 | INFO  | Task 286dcda1-9059-41c5-afad-f27ec04ba258 is in state SUCCESS 2026-03-16 00:51:10.896653 | orchestrator | 2026-03-16 00:51:10 | INFO  | Wait 1 second(s) until the next check
[identical STARTED polls for tasks fb753a71-…, c96e2f8a-… and a03f8dbe-…, repeated every ~3 s from 00:51:13 through 00:51:59, trimmed]
2026-03-16 00:52:02.659568 | orchestrator | 2026-03-16 00:52:02 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:52:02.662792 | orchestrator | 2026-03-16 00:52:02 | INFO  | Task c96e2f8a-52ca-433e-ac09-879563aff87f is in state SUCCESS 2026-03-16 00:52:02.663936 | orchestrator | 2026-03-16 00:52:02.663983 | orchestrator | 2026-03-16 00:52:02.663998 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-16 00:52:02.664011 | orchestrator | 2026-03-16 00:52:02.664023 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-16 00:52:02.664036 | orchestrator | Monday 16 March 2026 00:49:34 +0000 (0:00:00.152) 0:00:00.152 ********** 2026-03-16 00:52:02.664047 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:52:02.664060 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:52:02.664071 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:52:02.664082 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:52:02.664093 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:52:02.664104 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:52:02.664115 | orchestrator | 2026-03-16 00:52:02.664126 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-16 00:52:02.664137 | orchestrator | Monday 16 March 2026 00:49:35 +0000 (0:00:00.940) 0:00:01.093 ********** 2026-03-16 00:52:02.664149 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-03-16 00:52:02.664206 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-03-16 00:52:02.664249 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-03-16 00:52:02.664342 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-03-16 00:52:02.664369 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-03-16 00:52:02.664499 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-03-16 00:52:02.664525 | orchestrator | 2026-03-16 00:52:02.664547 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-03-16 00:52:02.664568 | orchestrator | 2026-03-16 00:52:02.664589 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-03-16 00:52:02.664608 | orchestrator | Monday 16 March 2026 00:49:36 +0000 (0:00:01.092) 0:00:02.185 ********** 2026-03-16 00:52:02.664621 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:52:02.664634 | orchestrator | 2026-03-16 00:52:02.664645 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-03-16 00:52:02.664680 | orchestrator | Monday 16 March 2026 00:49:37 +0000 (0:00:00.969) 0:00:03.155 ********** 2026-03-16 00:52:02.664694 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:52:02.664709 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:52:02.664720 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:52:02.664731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:52:02.664743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:52:02.664773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:52:02.664785 | orchestrator | 2026-03-16 00:52:02.664796 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-03-16 00:52:02.664807 | orchestrator | Monday 16 March 2026 00:49:38 +0000 (0:00:01.214) 0:00:04.370 ********** 2026-03-16 00:52:02.664818 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:52:02.664838 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-16 00:52:02.664858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:52:02.664869 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:52:02.664880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:52:02.664891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:52:02.664902 | orchestrator | 2026-03-16 00:52:02.664913 
| orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-03-16 00:52:02.664924 | orchestrator | Monday 16 March 2026 00:49:40 +0000 (0:00:01.778) 0:00:06.148 ********** 2026-03-16 00:52:02.664935 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:52:02.664947 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:52:02.664966 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:52:02.664977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:52:02.665000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:52:02.665012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:52:02.665023 | orchestrator | 2026-03-16 00:52:02.665034 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-03-16 00:52:02.665045 | orchestrator | Monday 16 March 2026 00:49:42 +0000 (0:00:01.324) 0:00:07.473 ********** 2026-03-16 00:52:02.665056 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:52:02.665067 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 
'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:52:02.665078 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:52:02.665090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:52:02.665102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:52:02.665121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:52:02.665133 | orchestrator | 2026-03-16 00:52:02.665151 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-03-16 00:52:02.665197 | orchestrator | Monday 16 March 2026 00:49:43 +0000 (0:00:01.403) 0:00:08.876 ********** 2026-03-16 00:52:02.665221 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:52:02.665239 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:52:02.665256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-03-16 00:52:02.665273 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:52:02.665293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:52:02.665313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:52:02.665332 | orchestrator | 2026-03-16 00:52:02.665352 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-03-16 00:52:02.665370 | orchestrator | Monday 16 March 2026 00:49:44 +0000 (0:00:01.422) 0:00:10.299 ********** 2026-03-16 00:52:02.665389 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:52:02.665406 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:52:02.665436 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:52:02.665454 | orchestrator | changed: [testbed-node-0] 
2026-03-16 00:52:02.665471 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:52:02.665488 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:52:02.665506 | orchestrator | 2026-03-16 00:52:02.665524 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-03-16 00:52:02.665542 | orchestrator | Monday 16 March 2026 00:49:47 +0000 (0:00:02.749) 0:00:13.048 ********** 2026-03-16 00:52:02.665561 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-03-16 00:52:02.665579 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-03-16 00:52:02.665609 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-03-16 00:52:02.665638 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-03-16 00:52:02.665660 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-03-16 00:52:02.665680 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-03-16 00:52:02.665699 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-16 00:52:02.665718 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-16 00:52:02.665730 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-16 00:52:02.665741 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-16 00:52:02.665752 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-16 00:52:02.665769 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-16 00:52:02.665780 | 
orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-16 00:52:02.665793 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-16 00:52:02.665804 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-16 00:52:02.665815 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-16 00:52:02.665825 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-16 00:52:02.665836 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-16 00:52:02.665847 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-16 00:52:02.665859 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-16 00:52:02.665870 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-16 00:52:02.665880 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-16 00:52:02.665891 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-16 00:52:02.665902 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-16 00:52:02.665912 | orchestrator | changed: [testbed-node-4] => (item={'name': 
'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-16 00:52:02.665923 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-16 00:52:02.665933 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-16 00:52:02.665944 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-16 00:52:02.665955 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-16 00:52:02.665966 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-16 00:52:02.665976 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-16 00:52:02.665995 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-16 00:52:02.666006 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-16 00:52:02.666084 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-16 00:52:02.666144 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-16 00:52:02.666203 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-16 00:52:02.666216 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-16 00:52:02.666227 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-16 00:52:02.666238 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-16 00:52:02.666249 | orchestrator | changed: [testbed-node-0] => 
(item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-16 00:52:02.666270 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-16 00:52:02.666282 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-16 00:52:02.666294 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-03-16 00:52:02.666313 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-03-16 00:52:02.666331 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-03-16 00:52:02.666348 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-03-16 00:52:02.666373 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-03-16 00:52:02.666391 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-03-16 00:52:02.666408 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-16 00:52:02.666424 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-16 00:52:02.666441 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-16 00:52:02.666458 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 
'value': '', 'state': 'absent'}) 2026-03-16 00:52:02.666476 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-16 00:52:02.666492 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-16 00:52:02.666510 | orchestrator | 2026-03-16 00:52:02.666528 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-16 00:52:02.666548 | orchestrator | Monday 16 March 2026 00:50:09 +0000 (0:00:21.625) 0:00:34.674 ********** 2026-03-16 00:52:02.666566 | orchestrator | 2026-03-16 00:52:02.666584 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-16 00:52:02.666599 | orchestrator | Monday 16 March 2026 00:50:09 +0000 (0:00:00.157) 0:00:34.832 ********** 2026-03-16 00:52:02.666610 | orchestrator | 2026-03-16 00:52:02.666621 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-16 00:52:02.666643 | orchestrator | Monday 16 March 2026 00:50:09 +0000 (0:00:00.138) 0:00:34.970 ********** 2026-03-16 00:52:02.666654 | orchestrator | 2026-03-16 00:52:02.666665 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-16 00:52:02.666676 | orchestrator | Monday 16 March 2026 00:50:09 +0000 (0:00:00.064) 0:00:35.035 ********** 2026-03-16 00:52:02.666686 | orchestrator | 2026-03-16 00:52:02.666697 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-16 00:52:02.666708 | orchestrator | Monday 16 March 2026 00:50:09 +0000 (0:00:00.064) 0:00:35.099 ********** 2026-03-16 00:52:02.666718 | orchestrator | 2026-03-16 00:52:02.666729 | orchestrator | TASK [ovn-controller : Flush handlers] 
***************************************** 2026-03-16 00:52:02.666740 | orchestrator | Monday 16 March 2026 00:50:09 +0000 (0:00:00.050) 0:00:35.150 ********** 2026-03-16 00:52:02.666751 | orchestrator | 2026-03-16 00:52:02.666761 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-03-16 00:52:02.666773 | orchestrator | Monday 16 March 2026 00:50:09 +0000 (0:00:00.060) 0:00:35.210 ********** 2026-03-16 00:52:02.666783 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:52:02.666795 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:52:02.666806 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:52:02.666817 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:52:02.666827 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:52:02.666838 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:52:02.666848 | orchestrator | 2026-03-16 00:52:02.666859 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-03-16 00:52:02.666870 | orchestrator | Monday 16 March 2026 00:50:11 +0000 (0:00:01.640) 0:00:36.851 ********** 2026-03-16 00:52:02.666881 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:52:02.666892 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:52:02.666903 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:52:02.666913 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:52:02.666924 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:52:02.666934 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:52:02.666945 | orchestrator | 2026-03-16 00:52:02.666956 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-03-16 00:52:02.666968 | orchestrator | 2026-03-16 00:52:02.666986 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-16 00:52:02.667004 | orchestrator | Monday 16 March 2026 00:50:40 +0000 (0:00:29.442) 0:01:06.294 ********** 
2026-03-16 00:52:02.667023 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:52:02.667050 | orchestrator | 2026-03-16 00:52:02.667069 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-16 00:52:02.667086 | orchestrator | Monday 16 March 2026 00:50:42 +0000 (0:00:01.548) 0:01:07.843 ********** 2026-03-16 00:52:02.667105 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:52:02.667122 | orchestrator | 2026-03-16 00:52:02.667154 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-03-16 00:52:02.667247 | orchestrator | Monday 16 March 2026 00:50:43 +0000 (0:00:00.791) 0:01:08.634 ********** 2026-03-16 00:52:02.667267 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:52:02.667285 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:52:02.667304 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:52:02.667315 | orchestrator | 2026-03-16 00:52:02.667327 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-03-16 00:52:02.667338 | orchestrator | Monday 16 March 2026 00:50:44 +0000 (0:00:01.071) 0:01:09.706 ********** 2026-03-16 00:52:02.667348 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:52:02.667359 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:52:02.667370 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:52:02.667381 | orchestrator | 2026-03-16 00:52:02.667392 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-03-16 00:52:02.667414 | orchestrator | Monday 16 March 2026 00:50:44 +0000 (0:00:00.353) 0:01:10.060 ********** 2026-03-16 00:52:02.667425 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:52:02.667436 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:52:02.667447 | 
orchestrator | ok: [testbed-node-2] 2026-03-16 00:52:02.667471 | orchestrator | 2026-03-16 00:52:02.667483 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-03-16 00:52:02.667494 | orchestrator | Monday 16 March 2026 00:50:44 +0000 (0:00:00.329) 0:01:10.390 ********** 2026-03-16 00:52:02.667505 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:52:02.667516 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:52:02.667527 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:52:02.667537 | orchestrator | 2026-03-16 00:52:02.667548 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-03-16 00:52:02.667559 | orchestrator | Monday 16 March 2026 00:50:45 +0000 (0:00:00.420) 0:01:10.811 ********** 2026-03-16 00:52:02.667570 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:52:02.667581 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:52:02.667592 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:52:02.667606 | orchestrator | 2026-03-16 00:52:02.667629 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-03-16 00:52:02.667656 | orchestrator | Monday 16 March 2026 00:50:46 +0000 (0:00:00.641) 0:01:11.452 ********** 2026-03-16 00:52:02.667674 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:52:02.667692 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:52:02.667709 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:52:02.667725 | orchestrator | 2026-03-16 00:52:02.667740 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-03-16 00:52:02.667757 | orchestrator | Monday 16 March 2026 00:50:46 +0000 (0:00:00.313) 0:01:11.765 ********** 2026-03-16 00:52:02.667775 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:52:02.667792 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:52:02.667812 | orchestrator | skipping: [testbed-node-2] 
2026-03-16 00:52:02.667832 | orchestrator | 2026-03-16 00:52:02.667848 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-03-16 00:52:02.667864 | orchestrator | Monday 16 March 2026 00:50:46 +0000 (0:00:00.437) 0:01:12.203 ********** 2026-03-16 00:52:02.667881 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:52:02.667893 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:52:02.667902 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:52:02.667912 | orchestrator | 2026-03-16 00:52:02.667921 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-03-16 00:52:02.667931 | orchestrator | Monday 16 March 2026 00:50:47 +0000 (0:00:00.333) 0:01:12.536 ********** 2026-03-16 00:52:02.667941 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:52:02.667950 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:52:02.667960 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:52:02.667969 | orchestrator | 2026-03-16 00:52:02.667979 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-03-16 00:52:02.667989 | orchestrator | Monday 16 March 2026 00:50:47 +0000 (0:00:00.560) 0:01:13.096 ********** 2026-03-16 00:52:02.667998 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:52:02.668008 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:52:02.668017 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:52:02.668027 | orchestrator | 2026-03-16 00:52:02.668036 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-03-16 00:52:02.668046 | orchestrator | Monday 16 March 2026 00:50:48 +0000 (0:00:00.338) 0:01:13.434 ********** 2026-03-16 00:52:02.668055 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:52:02.668064 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:52:02.668074 | orchestrator | skipping: [testbed-node-2] 
2026-03-16 00:52:02.668083 | orchestrator | 2026-03-16 00:52:02.668093 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-03-16 00:52:02.668102 | orchestrator | Monday 16 March 2026 00:50:48 +0000 (0:00:00.385) 0:01:13.820 ********** 2026-03-16 00:52:02.668123 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:52:02.668133 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:52:02.668143 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:52:02.668152 | orchestrator | 2026-03-16 00:52:02.668186 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-03-16 00:52:02.668196 | orchestrator | Monday 16 March 2026 00:50:48 +0000 (0:00:00.392) 0:01:14.212 ********** 2026-03-16 00:52:02.668206 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:52:02.668215 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:52:02.668225 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:52:02.668235 | orchestrator | 2026-03-16 00:52:02.668245 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-03-16 00:52:02.668254 | orchestrator | Monday 16 March 2026 00:50:49 +0000 (0:00:00.666) 0:01:14.879 ********** 2026-03-16 00:52:02.668264 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:52:02.668274 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:52:02.668283 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:52:02.668293 | orchestrator | 2026-03-16 00:52:02.668303 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-03-16 00:52:02.668313 | orchestrator | Monday 16 March 2026 00:50:49 +0000 (0:00:00.440) 0:01:15.319 ********** 2026-03-16 00:52:02.668323 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:52:02.668333 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:52:02.668343 | orchestrator | skipping: [testbed-node-2] 
2026-03-16 00:52:02.668352 | orchestrator | 2026-03-16 00:52:02.668371 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-03-16 00:52:02.668381 | orchestrator | Monday 16 March 2026 00:50:50 +0000 (0:00:00.412) 0:01:15.731 ********** 2026-03-16 00:52:02.668391 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:52:02.668401 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:52:02.668411 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:52:02.668421 | orchestrator | 2026-03-16 00:52:02.668431 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-03-16 00:52:02.668441 | orchestrator | Monday 16 March 2026 00:50:50 +0000 (0:00:00.395) 0:01:16.127 ********** 2026-03-16 00:52:02.668450 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:52:02.668460 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:52:02.668470 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:52:02.668479 | orchestrator | 2026-03-16 00:52:02.668489 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-16 00:52:02.668499 | orchestrator | Monday 16 March 2026 00:50:51 +0000 (0:00:00.400) 0:01:16.527 ********** 2026-03-16 00:52:02.668515 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:52:02.668526 | orchestrator | 2026-03-16 00:52:02.668536 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-03-16 00:52:02.668545 | orchestrator | Monday 16 March 2026 00:50:52 +0000 (0:00:00.932) 0:01:17.460 ********** 2026-03-16 00:52:02.668555 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:52:02.668565 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:52:02.668574 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:52:02.668584 | orchestrator | 2026-03-16 00:52:02.668594 | 
orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-03-16 00:52:02.668604 | orchestrator | Monday 16 March 2026 00:50:52 +0000 (0:00:00.468) 0:01:17.928 ********** 2026-03-16 00:52:02.668614 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:52:02.668623 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:52:02.668633 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:52:02.668643 | orchestrator | 2026-03-16 00:52:02.668652 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-03-16 00:52:02.668662 | orchestrator | Monday 16 March 2026 00:50:53 +0000 (0:00:00.497) 0:01:18.426 ********** 2026-03-16 00:52:02.668672 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:52:02.668682 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:52:02.668697 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:52:02.668707 | orchestrator | 2026-03-16 00:52:02.668717 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-03-16 00:52:02.668727 | orchestrator | Monday 16 March 2026 00:50:53 +0000 (0:00:00.699) 0:01:19.125 ********** 2026-03-16 00:52:02.668736 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:52:02.668746 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:52:02.668755 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:52:02.668765 | orchestrator | 2026-03-16 00:52:02.668775 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-03-16 00:52:02.668785 | orchestrator | Monday 16 March 2026 00:50:54 +0000 (0:00:00.410) 0:01:19.536 ********** 2026-03-16 00:52:02.668795 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:52:02.668805 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:52:02.668814 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:52:02.668824 | orchestrator | 2026-03-16 00:52:02.668833 | orchestrator 
| TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-03-16 00:52:02.668843 | orchestrator | Monday 16 March 2026 00:50:54 +0000 (0:00:00.401) 0:01:19.938 ********** 2026-03-16 00:52:02.668853 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:52:02.668862 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:52:02.668872 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:52:02.668881 | orchestrator | 2026-03-16 00:52:02.668891 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-03-16 00:52:02.668901 | orchestrator | Monday 16 March 2026 00:50:54 +0000 (0:00:00.363) 0:01:20.301 ********** 2026-03-16 00:52:02.668910 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:52:02.668920 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:52:02.668930 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:52:02.668939 | orchestrator | 2026-03-16 00:52:02.668949 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-03-16 00:52:02.668959 | orchestrator | Monday 16 March 2026 00:50:55 +0000 (0:00:00.701) 0:01:21.003 ********** 2026-03-16 00:52:02.668969 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:52:02.668978 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:52:02.668988 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:52:02.668997 | orchestrator | 2026-03-16 00:52:02.669007 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-16 00:52:02.669017 | orchestrator | Monday 16 March 2026 00:50:55 +0000 (0:00:00.357) 0:01:21.361 ********** 2026-03-16 00:52:02.669028 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:52:02.669041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:52:02.669057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:52:02.669069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:52:02.669089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:52:02.669099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': 
{'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:52:02.669109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:52:02.669120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:52:02.669130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:52:02.669139 | orchestrator | 2026-03-16 00:52:02.669149 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-16 00:52:02.669181 | orchestrator | Monday 16 March 2026 00:50:57 +0000 (0:00:01.600) 0:01:22.961 
********** 2026-03-16 00:52:02.669192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:52:02.669234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:52:02.669246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:52:02.669262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 00:52:02.669284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-16 00:52:02.669295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-16 00:52:02.669305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-16 00:52:02.669314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-16 00:52:02.669325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-16 00:52:02.669334 | orchestrator |
2026-03-16 00:52:02.669344 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-03-16 00:52:02.669354 | orchestrator | Monday 16 March 2026 00:51:01 +0000 (0:00:04.265) 0:01:27.227 **********
2026-03-16 00:52:02.669364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-16 00:52:02.669374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-16 00:52:02.669385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-16 00:52:02.669410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-16 00:52:02.669427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-16 00:52:02.669441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-16 00:52:02.669451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-16 00:52:02.669461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-16 00:52:02.669471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-16 00:52:02.669481 | orchestrator |
2026-03-16 00:52:02.669491 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-16 00:52:02.669501 | orchestrator | Monday 16 March 2026 00:51:04 +0000 (0:00:02.344) 0:01:29.571 **********
2026-03-16 00:52:02.669510 | orchestrator |
2026-03-16 00:52:02.669520 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-16 00:52:02.669530 | orchestrator | Monday 16 March 2026 00:51:04 +0000 (0:00:00.126) 0:01:29.697 **********
2026-03-16 00:52:02.669539 | orchestrator |
2026-03-16 00:52:02.669549 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-16 00:52:02.669559 | orchestrator | Monday 16 March 2026 00:51:04 +0000 (0:00:00.067) 0:01:29.765 **********
2026-03-16 00:52:02.669568 | orchestrator |
2026-03-16 00:52:02.669578 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-03-16 00:52:02.669588 | orchestrator | Monday 16 March 2026 00:51:04 +0000 (0:00:00.066) 0:01:29.831 **********
2026-03-16 00:52:02.669597 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:52:02.669607 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:52:02.669616 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:52:02.669626 | orchestrator |
2026-03-16 00:52:02.669636 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-03-16 00:52:02.669646 | orchestrator | Monday 16 March 2026 00:51:12 +0000 (0:00:07.864) 0:01:37.696 **********
2026-03-16 00:52:02.669661 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:52:02.669671 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:52:02.669680 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:52:02.669690 | orchestrator |
2026-03-16 00:52:02.669699 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-03-16 00:52:02.669709 | orchestrator | Monday 16 March 2026 00:51:14 +0000 (0:00:02.641) 0:01:40.337 **********
2026-03-16 00:52:02.669719 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:52:02.669728 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:52:02.669738 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:52:02.669748 | orchestrator |
2026-03-16 00:52:02.669757 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-03-16 00:52:02.669767 | orchestrator | Monday 16 March 2026 00:51:22 +0000 (0:00:07.579) 0:01:47.917 **********
2026-03-16 00:52:02.669777 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:52:02.669786 | orchestrator |
2026-03-16 00:52:02.669796 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-03-16 00:52:02.669806 | orchestrator | Monday 16 March 2026 00:51:22 +0000 (0:00:00.105) 0:01:48.022 **********
2026-03-16 00:52:02.669816 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:52:02.669826 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:52:02.669835 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:52:02.669845 | orchestrator |
2026-03-16 00:52:02.669860 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-03-16 00:52:02.669870 | orchestrator | Monday 16 March 2026 00:51:23 +0000 (0:00:00.818) 0:01:48.840 **********
2026-03-16 00:52:02.669880 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:52:02.669889 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:52:02.669899 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:52:02.669909 | orchestrator |
2026-03-16 00:52:02.669918 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-03-16 00:52:02.669928 | orchestrator | Monday 16 March 2026 00:51:24 +0000 (0:00:00.906) 0:01:49.747 **********
2026-03-16 00:52:02.669938 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:52:02.669947 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:52:02.669957 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:52:02.669966 | orchestrator |
2026-03-16 00:52:02.669976 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-03-16 00:52:02.669986 | orchestrator | Monday 16 March 2026 00:51:25 +0000 (0:00:00.786) 0:01:50.534 **********
2026-03-16 00:52:02.669996 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:52:02.670005 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:52:02.670066 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:52:02.670079 | orchestrator |
2026-03-16 00:52:02.670094 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-16 00:52:02.670104 | orchestrator | Monday 16 March 2026 00:51:25 +0000 (0:00:00.776) 0:01:51.310 **********
2026-03-16 00:52:02.670114 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:52:02.670123 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:52:02.670133 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:52:02.670143 | orchestrator |
2026-03-16 00:52:02.670152 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-16 00:52:02.670212 | orchestrator | Monday 16 March 2026 00:51:26 +0000 (0:00:00.824) 0:01:52.135 **********
2026-03-16 00:52:02.670223 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:52:02.670233 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:52:02.670242 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:52:02.670252 | orchestrator |
2026-03-16 00:52:02.670261 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-03-16 00:52:02.670271 | orchestrator | Monday 16 March 2026 00:51:27 +0000 (0:00:00.272) 0:01:52.906 **********
2026-03-16 00:52:02.670280 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:52:02.670290 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:52:02.670298 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:52:02.670306 | orchestrator |
2026-03-16 00:52:02.670314 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-03-16 00:52:02.670328 | orchestrator | Monday 16 March 2026 00:51:27 +0000 (0:00:00.272) 0:01:53.179 **********
2026-03-16 00:52:02.670337 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-16 00:52:02.670345 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-16 00:52:02.670354 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-16 00:52:02.670362 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-16 00:52:02.670372 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-16 00:52:02.670384 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-16 00:52:02.670406 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-16 00:52:02.670426 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-16 00:52:02.670441 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-16 00:52:02.670462 | orchestrator |
2026-03-16 00:52:02.670475 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-03-16 00:52:02.670484 | orchestrator | Monday 16 March 2026 00:51:29 +0000 (0:00:01.529) 0:01:54.708 **********
2026-03-16 00:52:02.670492 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-16 00:52:02.670500 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-16 00:52:02.670509 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-16 00:52:02.670517 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-16 00:52:02.670525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-16 00:52:02.670533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-16 00:52:02.670548 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-16 00:52:02.670557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-16 00:52:02.670573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-16 00:52:02.670587 | orchestrator |
2026-03-16 00:52:02.670595 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-03-16 00:52:02.670604 | orchestrator | Monday 16 March 2026 00:51:34 +0000 (0:00:04.741) 0:01:59.449 **********
2026-03-16 00:52:02.670612 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-16 00:52:02.670620 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-16 00:52:02.670629 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-16 00:52:02.670637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-16 00:52:02.670645 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-16 00:52:02.670654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-16 00:52:02.670662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-16 00:52:02.670678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-16 00:52:02.670686 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-16 00:52:02.670701 | orchestrator |
2026-03-16 00:52:02.670709 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-16 00:52:02.670721 | orchestrator | Monday 16 March 2026 00:51:36 +0000 (0:00:02.744) 0:02:02.194 **********
2026-03-16 00:52:02.670730 | orchestrator |
2026-03-16 00:52:02.670738 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-16 00:52:02.670746 | orchestrator | Monday 16 March 2026 00:51:36 +0000 (0:00:00.061) 0:02:02.255 **********
2026-03-16 00:52:02.670754 | orchestrator |
2026-03-16 00:52:02.670762 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-16 00:52:02.670770 | orchestrator | Monday 16 March 2026 00:51:36 +0000 (0:00:00.066) 0:02:02.321 **********
2026-03-16 00:52:02.670778 | orchestrator |
2026-03-16 00:52:02.670787 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-03-16 00:52:02.670794 | orchestrator | Monday 16 March 2026 00:51:36 +0000 (0:00:00.070) 0:02:02.391 **********
2026-03-16 00:52:02.670802 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:52:02.670810 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:52:02.670819 | orchestrator |
2026-03-16 00:52:02.670827 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-03-16 00:52:02.670835 | orchestrator | Monday 16 March 2026 00:51:43 +0000 (0:00:06.459) 0:02:08.851 **********
2026-03-16 00:52:02.670843 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:52:02.670851 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:52:02.670859 | orchestrator |
2026-03-16 00:52:02.670867 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-03-16 00:52:02.670875 | orchestrator | Monday 16 March 2026 00:51:49 +0000 (0:00:06.492) 0:02:15.343 **********
2026-03-16 00:52:02.670883 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:52:02.670891 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:52:02.670899 | orchestrator |
2026-03-16 00:52:02.670907 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-03-16 00:52:02.670915 | orchestrator | Monday 16 March 2026 00:51:56 +0000 (0:00:06.978) 0:02:22.322 **********
2026-03-16 00:52:02.670923 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:52:02.670931 | orchestrator |
2026-03-16 00:52:02.670939 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-03-16 00:52:02.670947 | orchestrator | Monday 16 March 2026 00:51:57 +0000 (0:00:00.144) 0:02:22.467 **********
2026-03-16 00:52:02.670956 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:52:02.670964 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:52:02.670972 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:52:02.670980 | orchestrator |
2026-03-16 00:52:02.670988 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-03-16 00:52:02.670996 | orchestrator | Monday 16 March 2026 00:51:57 +0000 (0:00:00.835) 0:02:23.303 **********
2026-03-16 00:52:02.671004 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:52:02.671012 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:52:02.671020 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:52:02.671028 | orchestrator |
2026-03-16 00:52:02.671036 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-03-16 00:52:02.671044 | orchestrator | Monday 16 March 2026 00:51:58 +0000 (0:00:00.676) 0:02:23.979 **********
2026-03-16 00:52:02.671052 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:52:02.671060 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:52:02.671068 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:52:02.671076 | orchestrator |
2026-03-16 00:52:02.671084 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-03-16 00:52:02.671092 | orchestrator | Monday 16 March 2026 00:51:59 +0000 (0:00:00.803) 0:02:24.783 **********
2026-03-16 00:52:02.671100 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:52:02.671108 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:52:02.671116 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:52:02.671130 | orchestrator |
2026-03-16 00:52:02.671138 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-16 00:52:02.671146 | orchestrator | Monday 16 March 2026 00:52:00 +0000 (0:00:00.653) 0:02:25.436 **********
2026-03-16 00:52:02.671155 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:52:02.671188 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:52:02.671202 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:52:02.671214 | orchestrator |
2026-03-16 00:52:02.671228 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-16 00:52:02.671242 | orchestrator | Monday 16 March 2026 00:52:00 +0000 (0:00:00.869) 0:02:26.305 **********
2026-03-16 00:52:02.671256 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:52:02.671269 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:52:02.671280 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:52:02.671288 | orchestrator |
2026-03-16 00:52:02.671296 | orchestrator | PLAY RECAP *********************************************************************
2026-03-16 00:52:02.671304 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-03-16 00:52:02.671313 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-03-16 00:52:02.671328 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-03-16 00:52:02.671337 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-16 00:52:02.671351 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-16 00:52:02.671364 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-16 00:52:02.671376 | orchestrator |
2026-03-16 00:52:02.671389 | orchestrator |
2026-03-16 00:52:02.671404 | orchestrator | TASKS RECAP ********************************************************************
2026-03-16 00:52:02.671417 | orchestrator | Monday 16 March 2026 00:52:01 +0000 (0:00:00.993) 0:02:27.298 **********
2026-03-16 00:52:02.671435 | orchestrator | ===============================================================================
2026-03-16 00:52:02.671444 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 29.44s
2026-03-16 00:52:02.671452 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 21.63s
2026-03-16 00:52:02.671460 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.56s
2026-03-16 00:52:02.671468 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 14.32s
2026-03-16 00:52:02.671476 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 9.13s
2026-03-16 00:52:02.671484 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.74s
2026-03-16 00:52:02.671492 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.27s
2026-03-16 00:52:02.671499 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.75s
2026-03-16 00:52:02.671507 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.74s
2026-03-16 00:52:02.671515 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.34s
2026-03-16 00:52:02.671523 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.78s
2026-03-16 00:52:02.671530 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.64s
2026-03-16 00:52:02.671538 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.60s
2026-03-16 00:52:02.671546 | orchestrator | ovn-db : include_tasks -------------------------------------------------- 1.55s
2026-03-16 00:52:02.671554 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.53s
2026-03-16 00:52:02.671571 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.42s
2026-03-16 00:52:02.671580 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.40s
2026-03-16 00:52:02.671588 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.32s
2026-03-16 00:52:02.671595 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.21s
2026-03-16 00:52:02.671604 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.09s
2026-03-16 00:52:02.671612 | orchestrator | 2026-03-16 00:52:02 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED
2026-03-16 00:52:02.671620 | orchestrator | 2026-03-16 00:52:02 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:52:05.714264 | orchestrator | 2026-03-16 00:52:05 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:52:05.715687 | orchestrator | 2026-03-16 00:52:05 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED
2026-03-16 00:52:05.716029 | orchestrator | 2026-03-16 00:52:05 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:52:08.759368 | orchestrator | 2026-03-16 00:52:08 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:52:08.761297 | orchestrator | 2026-03-16 00:52:08 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED
2026-03-16 00:52:08.761344 | orchestrator | 2026-03-16 00:52:08 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:52:11.797451 | orchestrator | 2026-03-16 00:52:11 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:52:11.799217 | orchestrator | 2026-03-16 00:52:11 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED
2026-03-16 00:52:11.799460 | orchestrator | 2026-03-16 00:52:11 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:52:14.846203 | orchestrator | 2026-03-16 00:52:14 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:52:14.848493 | orchestrator | 2026-03-16 00:52:14 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED
2026-03-16 00:52:14.848572 | orchestrator | 2026-03-16 00:52:14 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:52:17.890485 | orchestrator | 2026-03-16 00:52:17 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:52:17.891749 | orchestrator | 2026-03-16 00:52:17 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED
2026-03-16 00:52:17.891828 | orchestrator | 2026-03-16 00:52:17 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:52:20.942992 | orchestrator | 2026-03-16 00:52:20 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:52:20.946097 | orchestrator | 2026-03-16 00:52:20 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED
2026-03-16 00:52:20.946218 | orchestrator | 2026-03-16 00:52:20 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:52:23.990826 | orchestrator | 2026-03-16 00:52:23 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:52:23.991064 | orchestrator | 2026-03-16 00:52:23 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED
2026-03-16 00:52:23.991085 | orchestrator | 2026-03-16 00:52:23 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:52:27.032422 | orchestrator | 2026-03-16 00:52:27 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:52:27.032618 | orchestrator | 2026-03-16 00:52:27 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED
2026-03-16 00:52:27.032666 | orchestrator | 2026-03-16 00:52:27 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:52:30.077525 | orchestrator | 2026-03-16 00:52:30 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:52:30.077629 | orchestrator | 2026-03-16 00:52:30 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED
2026-03-16 00:52:30.077639 | orchestrator | 2026-03-16 00:52:30 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:52:33.114976 | orchestrator | 2026-03-16 00:52:33 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:52:33.115381 | orchestrator | 2026-03-16 00:52:33 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED
2026-03-16 00:52:33.115420 | orchestrator | 2026-03-16 00:52:33 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:52:36.172347 | orchestrator | 2026-03-16 00:52:36 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:52:36.173256 | orchestrator | 2026-03-16 00:52:36 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED
2026-03-16 00:52:36.173384 | orchestrator | 2026-03-16 00:52:36 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:52:39.216803 | orchestrator | 2026-03-16 00:52:39 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:52:39.218937 | orchestrator | 2026-03-16 00:52:39 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED
2026-03-16 00:52:39.219200 | orchestrator | 2026-03-16 00:52:39 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:52:42.261185 | orchestrator | 2026-03-16 00:52:42 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:52:42.261909 | orchestrator | 2026-03-16 00:52:42 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED
2026-03-16 00:52:42.261964 | orchestrator | 2026-03-16 00:52:42 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:52:45.293533 | orchestrator | 2026-03-16 00:52:45 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:52:45.296382 | orchestrator | 2026-03-16 00:52:45 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED
2026-03-16 00:52:45.296613 | orchestrator | 2026-03-16 00:52:45 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:52:48.342732 | orchestrator | 2026-03-16 00:52:48 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:52:48.342826 | orchestrator | 2026-03-16 00:52:48 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED
2026-03-16 00:52:48.342840 | orchestrator | 2026-03-16 00:52:48 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:52:51.386666 | orchestrator | 2026-03-16 00:52:51 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:52:51.388266 | orchestrator | 2026-03-16 00:52:51 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED
2026-03-16 00:52:51.388321 | orchestrator | 2026-03-16 00:52:51 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:52:54.423402 | orchestrator | 2026-03-16 00:52:54 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:52:54.423712 | orchestrator | 2026-03-16 00:52:54 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED
2026-03-16 00:52:54.423742 | orchestrator | 2026-03-16 00:52:54 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:52:57.475673 | orchestrator | 2026-03-16 00:52:57 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:52:57.475775 | orchestrator | 2026-03-16 00:52:57 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED
2026-03-16 00:52:57.475782 | orchestrator | 2026-03-16 00:52:57 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:53:00.517917 | orchestrator | 2026-03-16 00:53:00 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:53:00.519504 | orchestrator | 2026-03-16 00:53:00 | INFO  | Task
a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:53:00.519653 | orchestrator | 2026-03-16 00:53:00 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:53:03.550985 | orchestrator | 2026-03-16 00:53:03 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:53:03.552399 | orchestrator | 2026-03-16 00:53:03 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:53:03.552447 | orchestrator | 2026-03-16 00:53:03 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:53:06.598648 | orchestrator | 2026-03-16 00:53:06 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:53:06.602388 | orchestrator | 2026-03-16 00:53:06 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:53:06.602472 | orchestrator | 2026-03-16 00:53:06 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:53:09.649493 | orchestrator | 2026-03-16 00:53:09 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:53:09.650906 | orchestrator | 2026-03-16 00:53:09 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:53:09.650974 | orchestrator | 2026-03-16 00:53:09 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:53:12.693295 | orchestrator | 2026-03-16 00:53:12 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:53:12.693408 | orchestrator | 2026-03-16 00:53:12 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:53:12.693425 | orchestrator | 2026-03-16 00:53:12 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:53:15.734108 | orchestrator | 2026-03-16 00:53:15 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:53:15.734400 | orchestrator | 2026-03-16 00:53:15 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 
00:53:15.734413 | orchestrator | 2026-03-16 00:53:15 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:53:18.775612 | orchestrator | 2026-03-16 00:53:18 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:53:18.780084 | orchestrator | 2026-03-16 00:53:18 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:53:18.780288 | orchestrator | 2026-03-16 00:53:18 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:53:21.823544 | orchestrator | 2026-03-16 00:53:21 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:53:21.824104 | orchestrator | 2026-03-16 00:53:21 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:53:21.824126 | orchestrator | 2026-03-16 00:53:21 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:53:24.858284 | orchestrator | 2026-03-16 00:53:24 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:53:24.858798 | orchestrator | 2026-03-16 00:53:24 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:53:24.858887 | orchestrator | 2026-03-16 00:53:24 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:53:27.898156 | orchestrator | 2026-03-16 00:53:27 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:53:27.898936 | orchestrator | 2026-03-16 00:53:27 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:53:27.899012 | orchestrator | 2026-03-16 00:53:27 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:53:30.942804 | orchestrator | 2026-03-16 00:53:30 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:53:30.944640 | orchestrator | 2026-03-16 00:53:30 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:53:30.944729 | orchestrator | 2026-03-16 00:53:30 | INFO  | Wait 1 second(s) 
until the next check 2026-03-16 00:53:33.988606 | orchestrator | 2026-03-16 00:53:33 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:53:33.990071 | orchestrator | 2026-03-16 00:53:33 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:53:33.990109 | orchestrator | 2026-03-16 00:53:33 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:53:37.042547 | orchestrator | 2026-03-16 00:53:37 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:53:37.043041 | orchestrator | 2026-03-16 00:53:37 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:53:37.043585 | orchestrator | 2026-03-16 00:53:37 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:53:40.098486 | orchestrator | 2026-03-16 00:53:40 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:53:40.099697 | orchestrator | 2026-03-16 00:53:40 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:53:40.099727 | orchestrator | 2026-03-16 00:53:40 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:53:43.146312 | orchestrator | 2026-03-16 00:53:43 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:53:43.147715 | orchestrator | 2026-03-16 00:53:43 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:53:43.147818 | orchestrator | 2026-03-16 00:53:43 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:53:46.195648 | orchestrator | 2026-03-16 00:53:46 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:53:46.196614 | orchestrator | 2026-03-16 00:53:46 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:53:46.197075 | orchestrator | 2026-03-16 00:53:46 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:53:49.253002 | orchestrator | 2026-03-16 
00:53:49 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:53:49.254544 | orchestrator | 2026-03-16 00:53:49 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:53:49.254622 | orchestrator | 2026-03-16 00:53:49 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:53:52.314385 | orchestrator | 2026-03-16 00:53:52 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:53:52.315340 | orchestrator | 2026-03-16 00:53:52 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:53:52.315500 | orchestrator | 2026-03-16 00:53:52 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:53:55.374948 | orchestrator | 2026-03-16 00:53:55 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:53:55.376342 | orchestrator | 2026-03-16 00:53:55 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:53:55.377952 | orchestrator | 2026-03-16 00:53:55 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:53:58.431953 | orchestrator | 2026-03-16 00:53:58 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:53:58.434381 | orchestrator | 2026-03-16 00:53:58 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:53:58.434487 | orchestrator | 2026-03-16 00:53:58 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:54:01.479347 | orchestrator | 2026-03-16 00:54:01 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:54:01.480454 | orchestrator | 2026-03-16 00:54:01 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:54:01.480552 | orchestrator | 2026-03-16 00:54:01 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:54:04.524875 | orchestrator | 2026-03-16 00:54:04 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state 
STARTED 2026-03-16 00:54:04.525355 | orchestrator | 2026-03-16 00:54:04 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:54:04.525488 | orchestrator | 2026-03-16 00:54:04 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:54:07.564230 | orchestrator | 2026-03-16 00:54:07 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:54:07.564652 | orchestrator | 2026-03-16 00:54:07 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:54:07.564699 | orchestrator | 2026-03-16 00:54:07 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:54:10.605033 | orchestrator | 2026-03-16 00:54:10 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:54:10.607097 | orchestrator | 2026-03-16 00:54:10 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:54:10.607137 | orchestrator | 2026-03-16 00:54:10 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:54:13.655689 | orchestrator | 2026-03-16 00:54:13 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:54:13.656517 | orchestrator | 2026-03-16 00:54:13 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:54:13.656567 | orchestrator | 2026-03-16 00:54:13 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:54:16.712499 | orchestrator | 2026-03-16 00:54:16 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:54:16.713666 | orchestrator | 2026-03-16 00:54:16 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:54:16.715257 | orchestrator | 2026-03-16 00:54:16 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:54:19.757418 | orchestrator | 2026-03-16 00:54:19 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:54:19.757633 | orchestrator | 2026-03-16 00:54:19 | INFO  
| Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:54:19.757665 | orchestrator | 2026-03-16 00:54:19 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:54:22.793129 | orchestrator | 2026-03-16 00:54:22 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:54:22.793312 | orchestrator | 2026-03-16 00:54:22 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:54:22.793330 | orchestrator | 2026-03-16 00:54:22 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:54:25.846395 | orchestrator | 2026-03-16 00:54:25 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:54:25.847206 | orchestrator | 2026-03-16 00:54:25 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:54:25.847297 | orchestrator | 2026-03-16 00:54:25 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:54:28.890266 | orchestrator | 2026-03-16 00:54:28 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:54:28.890644 | orchestrator | 2026-03-16 00:54:28 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:54:28.891658 | orchestrator | 2026-03-16 00:54:28 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:54:31.936703 | orchestrator | 2026-03-16 00:54:31 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:54:31.937326 | orchestrator | 2026-03-16 00:54:31 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:54:31.938152 | orchestrator | 2026-03-16 00:54:31 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:54:34.984450 | orchestrator | 2026-03-16 00:54:34 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:54:34.984544 | orchestrator | 2026-03-16 00:54:34 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 
00:54:34.984554 | orchestrator | 2026-03-16 00:54:34 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:54:38.039784 | orchestrator | 2026-03-16 00:54:38 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:54:38.042158 | orchestrator | 2026-03-16 00:54:38 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:54:38.042237 | orchestrator | 2026-03-16 00:54:38 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:54:41.086840 | orchestrator | 2026-03-16 00:54:41 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:54:41.087763 | orchestrator | 2026-03-16 00:54:41 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:54:41.087827 | orchestrator | 2026-03-16 00:54:41 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:54:44.135228 | orchestrator | 2026-03-16 00:54:44 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:54:44.137503 | orchestrator | 2026-03-16 00:54:44 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:54:44.137584 | orchestrator | 2026-03-16 00:54:44 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:54:47.185090 | orchestrator | 2026-03-16 00:54:47 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:54:47.188349 | orchestrator | 2026-03-16 00:54:47 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:54:47.188418 | orchestrator | 2026-03-16 00:54:47 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:54:50.245350 | orchestrator | 2026-03-16 00:54:50 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:54:50.246395 | orchestrator | 2026-03-16 00:54:50 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:54:50.247833 | orchestrator | 2026-03-16 00:54:50 | INFO  | Wait 1 second(s) 
until the next check 2026-03-16 00:54:53.284071 | orchestrator | 2026-03-16 00:54:53 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:54:53.285573 | orchestrator | 2026-03-16 00:54:53 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:54:53.285610 | orchestrator | 2026-03-16 00:54:53 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:54:56.329891 | orchestrator | 2026-03-16 00:54:56 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:54:56.332789 | orchestrator | 2026-03-16 00:54:56 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state STARTED 2026-03-16 00:54:56.332844 | orchestrator | 2026-03-16 00:54:56 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:54:59.385371 | orchestrator | 2026-03-16 00:54:59 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:54:59.385459 | orchestrator | 2026-03-16 00:54:59 | INFO  | Task da285a89-1082-48c5-8040-28776fd685e2 is in state STARTED 2026-03-16 00:54:59.392479 | orchestrator | 2026-03-16 00:54:59 | INFO  | Task a03f8dbe-db84-4730-b904-e3ccfb4da227 is in state SUCCESS 2026-03-16 00:54:59.395120 | orchestrator | 2026-03-16 00:54:59.395186 | orchestrator | 2026-03-16 00:54:59.395193 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-16 00:54:59.395228 | orchestrator | 2026-03-16 00:54:59.395232 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-16 00:54:59.395237 | orchestrator | Monday 16 March 2026 00:48:32 +0000 (0:00:00.269) 0:00:00.270 ********** 2026-03-16 00:54:59.395242 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:54:59.395247 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:54:59.395251 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:54:59.395255 | orchestrator | 2026-03-16 00:54:59.395259 | orchestrator | TASK [Group hosts 
based on enabled services] ***********************************
2026-03-16 00:54:59.395307 | orchestrator | Monday 16 March 2026 00:48:32 +0000 (0:00:00.509) 0:00:00.780 **********
2026-03-16 00:54:59.395311 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-03-16 00:54:59.395316 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-03-16 00:54:59.395320 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-03-16 00:54:59.395366 | orchestrator |
2026-03-16 00:54:59.395370 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-03-16 00:54:59.395374 | orchestrator |
2026-03-16 00:54:59.395378 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-03-16 00:54:59.395398 | orchestrator | Monday 16 March 2026 00:48:33 +0000 (0:00:00.717) 0:00:01.497 **********
2026-03-16 00:54:59.395402 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-16 00:54:59.395406 | orchestrator |
2026-03-16 00:54:59.395410 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-03-16 00:54:59.395413 | orchestrator | Monday 16 March 2026 00:48:33 +0000 (0:00:00.732) 0:00:02.229 **********
2026-03-16 00:54:59.395417 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:54:59.395421 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:54:59.395425 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:54:59.395428 | orchestrator |
2026-03-16 00:54:59.395432 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-03-16 00:54:59.395436 | orchestrator | Monday 16 March 2026 00:48:34 +0000 (0:00:00.776) 0:00:03.006 **********
2026-03-16 00:54:59.395440 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-16 00:54:59.395444 | orchestrator |
2026-03-16 00:54:59.395448 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-03-16 00:54:59.395452 | orchestrator | Monday 16 March 2026 00:48:35 +0000 (0:00:00.895) 0:00:03.901 **********
2026-03-16 00:54:59.395455 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:54:59.395459 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:54:59.395463 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:54:59.395466 | orchestrator |
2026-03-16 00:54:59.395470 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-03-16 00:54:59.395474 | orchestrator | Monday 16 March 2026 00:48:36 +0000 (0:00:00.716) 0:00:04.618 **********
2026-03-16 00:54:59.395478 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-16 00:54:59.395495 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-16 00:54:59.395499 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-16 00:54:59.395502 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-16 00:54:59.395506 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-16 00:54:59.395511 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-16 00:54:59.395515 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-16 00:54:59.395518 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-16 00:54:59.395522 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-16 00:54:59.395526 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-16 00:54:59.395529 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-16 00:54:59.395533 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-16 00:54:59.395537 | orchestrator |
2026-03-16 00:54:59.395545 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-16 00:54:59.395549 | orchestrator | Monday 16 March 2026 00:48:40 +0000 (0:00:04.102) 0:00:08.720 **********
2026-03-16 00:54:59.395553 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-16 00:54:59.395557 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-16 00:54:59.395561 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-16 00:54:59.395565 | orchestrator |
2026-03-16 00:54:59.395569 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-16 00:54:59.395572 | orchestrator | Monday 16 March 2026 00:48:41 +0000 (0:00:00.795) 0:00:09.516 **********
2026-03-16 00:54:59.395577 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-16 00:54:59.395581 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-16 00:54:59.395584 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-16 00:54:59.395588 | orchestrator |
2026-03-16 00:54:59.395592 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-03-16 00:54:59.395596 | orchestrator | Monday 16 March 2026 00:48:42 +0000 (0:00:01.549) 0:00:11.065 **********
2026-03-16 00:54:59.395600 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-03-16 00:54:59.395604 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:54:59.395619 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-03-16 00:54:59.395623 | orchestrator | skipping: 
[testbed-node-1]
2026-03-16 00:54:59.395627 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-03-16 00:54:59.395630 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:54:59.395634 | orchestrator |
2026-03-16 00:54:59.395638 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-03-16 00:54:59.395642 | orchestrator | Monday 16 March 2026 00:48:43 +0000 (0:00:00.701) 0:00:11.767 **********
2026-03-16 00:54:59.395648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-16 00:54:59.395657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-16 00:54:59.395664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-16 00:54:59.395668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-16 00:54:59.395675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-16 00:54:59.395683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-16 00:54:59.395688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-16 00:54:59.395693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-16 00:54:59.395700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-16 00:54:59.395705 | orchestrator |
2026-03-16 00:54:59.395709 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-03-16 00:54:59.395714 | orchestrator | Monday 16 March 2026 00:48:45 +0000 (0:00:02.412) 0:00:14.180 **********
2026-03-16 00:54:59.395718 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:54:59.395723 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:54:59.395727 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:54:59.395731 | orchestrator |
2026-03-16 00:54:59.395735 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-03-16 00:54:59.395739 | orchestrator | Monday 16 March 2026 00:48:47 +0000 (0:00:01.354) 0:00:15.534 **********
2026-03-16 00:54:59.395744 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-03-16 00:54:59.395768 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-03-16 00:54:59.395774 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-03-16 00:54:59.395779 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-03-16 00:54:59.395785 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-03-16 00:54:59.395791 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-03-16 00:54:59.395797 | orchestrator |
2026-03-16 00:54:59.395803 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-03-16 00:54:59.395809 | orchestrator | Monday 16 March 2026 00:48:49 +0000 (0:00:01.955) 0:00:17.489 **********
2026-03-16 00:54:59.395815 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:54:59.395821 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:54:59.395828 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:54:59.395834 | orchestrator | 2026-03-16 00:54:59.395839 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-03-16 00:54:59.395846 | orchestrator | Monday 16 March 2026 00:48:50 +0000 (0:00:01.168) 0:00:18.658 ********** 2026-03-16 00:54:59.395852 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:54:59.395858 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:54:59.395864 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:54:59.395870 | orchestrator | 2026-03-16 00:54:59.395877 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-03-16 00:54:59.395884 | orchestrator | Monday 16 March 2026 00:48:54 +0000 (0:00:04.036) 0:00:22.695 ********** 2026-03-16 00:54:59.395894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-16 00:54:59.395907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-16 00:54:59.395915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-16 00:54:59.395921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7affe166b6287bf63ea1ba7f1aa9176763c6dee7', '__omit_place_holder__7affe166b6287bf63ea1ba7f1aa9176763c6dee7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-16 00:54:59.395926 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.395931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-16 00:54:59.395936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-16 00:54:59.395940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-16 00:54:59.395948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__7affe166b6287bf63ea1ba7f1aa9176763c6dee7', '__omit_place_holder__7affe166b6287bf63ea1ba7f1aa9176763c6dee7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-16 00:54:59.395958 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.395963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-16 00:54:59.395967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-16 00:54:59.395972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-16 00:54:59.395976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7affe166b6287bf63ea1ba7f1aa9176763c6dee7', '__omit_place_holder__7affe166b6287bf63ea1ba7f1aa9176763c6dee7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-16 00:54:59.395981 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.395985 | orchestrator | 2026-03-16 00:54:59.395989 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-03-16 00:54:59.395994 | orchestrator | Monday 16 March 2026 00:48:55 +0000 (0:00:00.899) 0:00:23.594 ********** 2026-03-16 00:54:59.396018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-16 00:54:59.396030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-16 00:54:59.396035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-16 00:54:59.396039 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-16 00:54:59.396044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-16 00:54:59.396069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7affe166b6287bf63ea1ba7f1aa9176763c6dee7', '__omit_place_holder__7affe166b6287bf63ea1ba7f1aa9176763c6dee7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-16 00:54:59.396076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-16 00:54:59.396081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-16 00:54:59.396091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7affe166b6287bf63ea1ba7f1aa9176763c6dee7', '__omit_place_holder__7affe166b6287bf63ea1ba7f1aa9176763c6dee7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-16 00:54:59.396096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-16 00:54:59.396100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-16 00:54:59.396104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7affe166b6287bf63ea1ba7f1aa9176763c6dee7', '__omit_place_holder__7affe166b6287bf63ea1ba7f1aa9176763c6dee7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-16 00:54:59.396108 | orchestrator | 2026-03-16 00:54:59.396111 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-03-16 00:54:59.396115 | orchestrator | Monday 16 March 2026 00:48:58 +0000 (0:00:02.764) 0:00:26.358 ********** 2026-03-16 00:54:59.396119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-16 00:54:59.396131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-16 00:54:59.396170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-16 00:54:59.396175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-16 00:54:59.396179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-16 00:54:59.396183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-16 00:54:59.396187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-16 00:54:59.396196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-16 00:54:59.396223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-16 00:54:59.396230 | orchestrator | 2026-03-16 00:54:59.396236 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-03-16 00:54:59.396242 | orchestrator | Monday 16 March 2026 00:49:01 +0000 (0:00:03.624) 0:00:29.982 ********** 2026-03-16 00:54:59.396248 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-16 00:54:59.396259 | 
orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-16 00:54:59.396266 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-16 00:54:59.396272 | orchestrator | 2026-03-16 00:54:59.396278 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-03-16 00:54:59.396285 | orchestrator | Monday 16 March 2026 00:49:05 +0000 (0:00:04.098) 0:00:34.082 ********** 2026-03-16 00:54:59.396291 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-16 00:54:59.396298 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-16 00:54:59.396305 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-16 00:54:59.396311 | orchestrator | 2026-03-16 00:54:59.396318 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-03-16 00:54:59.396325 | orchestrator | Monday 16 March 2026 00:49:10 +0000 (0:00:04.254) 0:00:38.336 ********** 2026-03-16 00:54:59.396331 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.396338 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.396343 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.396362 | orchestrator | 2026-03-16 00:54:59.396366 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-03-16 00:54:59.396372 | orchestrator | Monday 16 March 2026 00:49:10 +0000 (0:00:00.707) 0:00:39.044 ********** 2026-03-16 00:54:59.396378 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-16 00:54:59.396462 | orchestrator | changed: [testbed-node-0] 
=> (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-16 00:54:59.396469 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-16 00:54:59.396475 | orchestrator | 2026-03-16 00:54:59.396481 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-03-16 00:54:59.396487 | orchestrator | Monday 16 March 2026 00:49:13 +0000 (0:00:02.957) 0:00:42.001 ********** 2026-03-16 00:54:59.396493 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-16 00:54:59.396499 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-16 00:54:59.396504 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-16 00:54:59.396517 | orchestrator | 2026-03-16 00:54:59.396523 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-03-16 00:54:59.396529 | orchestrator | Monday 16 March 2026 00:49:15 +0000 (0:00:02.151) 0:00:44.153 ********** 2026-03-16 00:54:59.396535 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-03-16 00:54:59.396542 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-03-16 00:54:59.396548 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-03-16 00:54:59.396554 | orchestrator | 2026-03-16 00:54:59.396561 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-03-16 00:54:59.396568 | orchestrator | Monday 16 March 2026 00:49:17 +0000 (0:00:01.835) 0:00:45.989 ********** 2026-03-16 00:54:59.396574 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-03-16 00:54:59.396580 | orchestrator | changed: 
[testbed-node-1] => (item=haproxy-internal.pem) 2026-03-16 00:54:59.396586 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-03-16 00:54:59.396593 | orchestrator | 2026-03-16 00:54:59.396600 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-16 00:54:59.396606 | orchestrator | Monday 16 March 2026 00:49:19 +0000 (0:00:02.081) 0:00:48.070 ********** 2026-03-16 00:54:59.396612 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:54:59.396618 | orchestrator | 2026-03-16 00:54:59.396625 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-03-16 00:54:59.396631 | orchestrator | Monday 16 March 2026 00:49:20 +0000 (0:00:00.706) 0:00:48.776 ********** 2026-03-16 00:54:59.396645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-16 00:54:59.396663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-16 00:54:59.396668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-16 00:54:59.396672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-16 00:54:59.396680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-16 00:54:59.396684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-16 00:54:59.396692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-16 00:54:59.396697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-16 00:54:59.396705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-16 00:54:59.396709 | orchestrator | 2026-03-16 00:54:59.396713 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-03-16 00:54:59.396717 | orchestrator | Monday 16 March 2026 00:49:24 +0000 (0:00:03.811) 0:00:52.588 ********** 2026-03-16 00:54:59.396721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-16 00:54:59.396728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-16 00:54:59.396732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-16 00:54:59.396736 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.396740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-16 00:54:59.396768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-16 00:54:59.396777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-16 00:54:59.396781 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.396785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-16 00:54:59.396793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-16 00:54:59.396797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-16 00:54:59.396801 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.396825 | orchestrator | 2026-03-16 00:54:59.396829 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-03-16 00:54:59.396833 | orchestrator | Monday 16 March 2026 00:49:25 +0000 (0:00:00.664) 0:00:53.253 ********** 2026-03-16 00:54:59.396837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': 
'30'}}})  2026-03-16 00:54:59.396843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-16 00:54:59.396851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-16 00:54:59.396855 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.396859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': 
'30'}}})  2026-03-16 00:54:59.396866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-16 00:54:59.396884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-16 00:54:59.396888 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.396892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': 
'30'}}})  2026-03-16 00:54:59.396898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-16 00:54:59.396902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-16 00:54:59.396906 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.396910 | orchestrator | 2026-03-16 00:54:59.396932 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-16 00:54:59.396937 | orchestrator | Monday 16 March 2026 00:49:26 +0000 (0:00:01.093) 0:00:54.346 ********** 2026-03-16 00:54:59.396944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-16 00:54:59.396953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-16 00:54:59.396960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-16 00:54:59.396965 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.396994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-16 00:54:59.397004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-16 00:54:59.397048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-16 00:54:59.397055 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.397067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-16 00:54:59.397079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-16 00:54:59.397085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-16 00:54:59.397091 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.397098 | orchestrator | 2026-03-16 00:54:59.397104 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-16 00:54:59.397111 | orchestrator | Monday 16 March 2026 00:49:27 +0000 (0:00:01.250) 0:00:55.597 ********** 2026-03-16 00:54:59.397151 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-16 00:54:59.397158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-16 00:54:59.397166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-16 00:54:59.397171 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.397175 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-16 00:54:59.397242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-16 00:54:59.397247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-16 
00:54:59.397252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-16 00:54:59.398135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-16 00:54:59.398171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-16 00:54:59.398176 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.398180 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.398184 | orchestrator | 2026-03-16 
00:54:59.398188 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-16 00:54:59.398197 | orchestrator | Monday 16 March 2026 00:49:27 +0000 (0:00:00.528) 0:00:56.125 ********** 2026-03-16 00:54:59.398202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-16 00:54:59.398213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-16 00:54:59.398217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-16 00:54:59.398221 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.398225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-16 00:54:59.398229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-16 00:54:59.398239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-16 00:54:59.398245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-16 00:54:59.398253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-16 00:54:59.398257 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.398261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-16 00:54:59.398264 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.398268 | orchestrator | 2026-03-16 00:54:59.398272 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-03-16 00:54:59.398276 | orchestrator | Monday 16 March 2026 00:49:28 +0000 (0:00:01.003) 0:00:57.129 ********** 2026-03-16 00:54:59.398280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-16 00:54:59.398311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-16 00:54:59.398319 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-16 00:54:59.398324 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.398330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-16 00:54:59.398338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-16 00:54:59.398342 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-16 00:54:59.398346 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.398362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-16 00:54:59.398367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-16 00:54:59.398371 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-16 00:54:59.398375 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.398378 | orchestrator | 2026-03-16 00:54:59.398385 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-03-16 00:54:59.398389 | orchestrator | Monday 16 March 2026 00:49:29 +0000 (0:00:00.854) 0:00:57.983 ********** 2026-03-16 00:54:59.398393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-16 00:54:59.398402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-16 00:54:59.398406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-16 00:54:59.398411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-16 00:54:59.398415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-16 00:54:59.398419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-16 00:54:59.398423 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.398427 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.398434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-16 00:54:59.398443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-16 00:54:59.398447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-16 00:54:59.398451 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.398455 | orchestrator | 2026-03-16 00:54:59.398459 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-03-16 00:54:59.398462 | orchestrator | Monday 16 March 2026 00:49:30 +0000 (0:00:00.875) 0:00:58.859 ********** 2026-03-16 00:54:59.398466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-16 00:54:59.398470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-16 00:54:59.398474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-16 00:54:59.398478 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.398484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-16 00:54:59.398495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-16 00:54:59.398499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-16 00:54:59.398503 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.398507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-16 00:54:59.398511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-16 00:54:59.398515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-16 00:54:59.398519 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.398523 | orchestrator | 2026-03-16 00:54:59.398527 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-03-16 00:54:59.398530 | orchestrator | Monday 16 March 2026 00:49:31 +0000 (0:00:00.713) 0:00:59.573 ********** 2026-03-16 00:54:59.398537 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-16 00:54:59.398542 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-16 00:54:59.398546 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-16 00:54:59.398549 | orchestrator | 2026-03-16 
00:54:59.398553 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-03-16 00:54:59.398557 | orchestrator | Monday 16 March 2026 00:49:33 +0000 (0:00:01.920) 0:01:01.493 ********** 2026-03-16 00:54:59.398561 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-16 00:54:59.398566 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-16 00:54:59.398570 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-16 00:54:59.398574 | orchestrator | 2026-03-16 00:54:59.398578 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-03-16 00:54:59.398582 | orchestrator | Monday 16 March 2026 00:49:34 +0000 (0:00:01.503) 0:01:02.997 ********** 2026-03-16 00:54:59.398585 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-16 00:54:59.398589 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-16 00:54:59.398593 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-16 00:54:59.398597 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.398601 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-16 00:54:59.398607 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.398611 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-16 00:54:59.398614 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-16 00:54:59.398618 | orchestrator | skipping: 
[testbed-node-2] 2026-03-16 00:54:59.398622 | orchestrator | 2026-03-16 00:54:59.398626 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-03-16 00:54:59.398629 | orchestrator | Monday 16 March 2026 00:49:35 +0000 (0:00:01.134) 0:01:04.132 ********** 2026-03-16 00:54:59.398633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-16 00:54:59.398638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-16 00:54:59.398642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-16 00:54:59.398648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-16 00:54:59.398655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-16 00:54:59.398660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-16 00:54:59.398665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-16 00:54:59.398669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-16 00:54:59.398688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-16 00:54:59.398696 | orchestrator | 2026-03-16 00:54:59.398701 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-03-16 00:54:59.398705 | orchestrator | Monday 16 March 2026 00:49:38 +0000 (0:00:03.056) 0:01:07.188 ********** 2026-03-16 00:54:59.398710 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:54:59.398727 | orchestrator | 2026-03-16 00:54:59.398732 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-03-16 00:54:59.398737 | orchestrator | Monday 16 March 2026 00:49:39 +0000 (0:00:00.611) 0:01:07.799 ********** 2026-03-16 00:54:59.398767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-16 00:54:59.398777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-16 00:54:59.398788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.398795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.398802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-16 00:54:59.398813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-16 00:54:59.398821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.398832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 
'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.398842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-16 00:54:59.398849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  
2026-03-16 00:54:59.398855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.398862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.398867 | orchestrator | 2026-03-16 00:54:59.398871 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-03-16 00:54:59.398876 | orchestrator | Monday 16 March 2026 00:49:43 +0000 (0:00:03.442) 0:01:11.242 ********** 2026-03-16 00:54:59.398880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-16 00:54:59.398888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-16 00:54:59.398895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.398900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.398904 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.398909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-16 00:54:59.398916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-16 00:54:59.398921 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.398928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.398933 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.398939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': 
'8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-16 00:54:59.398944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-16 00:54:59.398951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.398956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.398960 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.398965 | orchestrator | 2026-03-16 00:54:59.398969 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-03-16 00:54:59.398973 | orchestrator | Monday 16 March 2026 00:49:44 +0000 (0:00:01.102) 0:01:12.344 ********** 2026-03-16 00:54:59.398978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-16 00:54:59.398984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-16 00:54:59.398988 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.398993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-16 00:54:59.398997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-16 00:54:59.399002 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.399006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-16 00:54:59.399013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-16 00:54:59.399017 | orchestrator | skipping: 
[testbed-node-2] 2026-03-16 00:54:59.399022 | orchestrator | 2026-03-16 00:54:59.399026 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-16 00:54:59.399031 | orchestrator | Monday 16 March 2026 00:49:44 +0000 (0:00:00.778) 0:01:13.122 ********** 2026-03-16 00:54:59.399035 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:54:59.399039 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:54:59.399043 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:54:59.399048 | orchestrator | 2026-03-16 00:54:59.399052 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-03-16 00:54:59.399056 | orchestrator | Monday 16 March 2026 00:49:46 +0000 (0:00:01.346) 0:01:14.469 ********** 2026-03-16 00:54:59.399060 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:54:59.399065 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:54:59.399073 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:54:59.399077 | orchestrator | 2026-03-16 00:54:59.399082 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-03-16 00:54:59.399090 | orchestrator | Monday 16 March 2026 00:49:48 +0000 (0:00:01.997) 0:01:16.467 ********** 2026-03-16 00:54:59.399094 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:54:59.399098 | orchestrator | 2026-03-16 00:54:59.399103 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-03-16 00:54:59.399107 | orchestrator | Monday 16 March 2026 00:49:49 +0000 (0:00:01.044) 0:01:17.511 ********** 2026-03-16 00:54:59.399112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-16 00:54:59.399117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.399123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-16 00:54:59.399130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.399137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.399145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.399149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-16 00:54:59.399153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.399157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.399161 | orchestrator | 2026-03-16 00:54:59.399165 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-03-16 00:54:59.399169 | orchestrator | Monday 16 March 2026 00:49:52 +0000 (0:00:03.480) 0:01:20.991 ********** 2026-03-16 00:54:59.399175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}}}})  2026-03-16 00:54:59.399185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.399189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.399193 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.399197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-16 00:54:59.399201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.399208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-16 00:54:59.399219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.399223 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.399227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.399231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.399234 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.399238 | orchestrator | 2026-03-16 00:54:59.399242 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-03-16 00:54:59.399246 | orchestrator | Monday 16 March 2026 00:49:53 +0000 (0:00:00.623) 0:01:21.615 ********** 2026-03-16 00:54:59.399250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-16 00:54:59.399254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-16 00:54:59.399259 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.399262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-16 00:54:59.399266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-16 00:54:59.399270 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.399278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-16 00:54:59.399282 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-16 00:54:59.399285 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.399289 | orchestrator | 2026-03-16 00:54:59.399525 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-03-16 00:54:59.399535 | orchestrator | Monday 16 March 2026 00:49:54 +0000 (0:00:00.966) 0:01:22.581 ********** 2026-03-16 00:54:59.399538 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:54:59.399542 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:54:59.399546 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:54:59.399550 | orchestrator | 2026-03-16 00:54:59.399553 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-03-16 00:54:59.399557 | orchestrator | Monday 16 March 2026 00:49:55 +0000 (0:00:01.405) 0:01:23.987 ********** 2026-03-16 00:54:59.399561 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:54:59.399565 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:54:59.399569 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:54:59.399572 | orchestrator | 2026-03-16 00:54:59.399576 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-03-16 00:54:59.399580 | orchestrator | Monday 16 March 2026 00:49:57 +0000 (0:00:01.942) 0:01:25.930 ********** 2026-03-16 00:54:59.399584 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.399587 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.399591 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.399595 | orchestrator | 2026-03-16 00:54:59.399602 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-03-16 00:54:59.399606 | orchestrator | Monday 16 March 
2026 00:49:57 +0000 (0:00:00.269) 0:01:26.199 ********** 2026-03-16 00:54:59.399610 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:54:59.399614 | orchestrator | 2026-03-16 00:54:59.399618 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-03-16 00:54:59.399621 | orchestrator | Monday 16 March 2026 00:49:58 +0000 (0:00:00.775) 0:01:26.975 ********** 2026-03-16 00:54:59.399626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-16 00:54:59.399631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 
192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-16 00:54:59.399641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-16 00:54:59.399645 | orchestrator | 2026-03-16 00:54:59.399649 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-03-16 00:54:59.399653 | orchestrator | Monday 16 March 2026 00:50:01 +0000 (0:00:02.803) 0:01:29.779 ********** 2026-03-16 00:54:59.399660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-16 00:54:59.399664 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.399670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-16 00:54:59.399675 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.399678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-16 00:54:59.399682 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.399686 | orchestrator | 2026-03-16 00:54:59.399694 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-03-16 00:54:59.399698 | orchestrator | Monday 16 March 2026 00:50:03 +0000 (0:00:01.540) 0:01:31.320 ********** 2026-03-16 00:54:59.399707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-16 00:54:59.399712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-16 00:54:59.399716 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.399720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-16 00:54:59.399726 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-16 00:54:59.399731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-16 00:54:59.399734 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.399741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-16 00:54:59.399763 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.399768 | orchestrator | 2026-03-16 00:54:59.399772 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-03-16 00:54:59.399775 | orchestrator | Monday 16 March 2026 00:50:04 +0000 (0:00:01.626) 0:01:32.947 ********** 2026-03-16 00:54:59.399779 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.399783 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.399787 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.399790 | orchestrator | 
2026-03-16 00:54:59.399794 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-03-16 00:54:59.399798 | orchestrator | Monday 16 March 2026 00:50:05 +0000 (0:00:00.599) 0:01:33.547 ********** 2026-03-16 00:54:59.399802 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.399805 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.399809 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.399813 | orchestrator | 2026-03-16 00:54:59.399816 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-03-16 00:54:59.399823 | orchestrator | Monday 16 March 2026 00:50:06 +0000 (0:00:01.190) 0:01:34.737 ********** 2026-03-16 00:54:59.399827 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:54:59.399831 | orchestrator | 2026-03-16 00:54:59.399835 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-03-16 00:54:59.399839 | orchestrator | Monday 16 March 2026 00:50:07 +0000 (0:00:00.681) 0:01:35.418 ********** 2026-03-16 00:54:59.399843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-16 00:54:59.399847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.399854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.399862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.399866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-16 00:54:59.399875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-16 00:54:59.399879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.399884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.399890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.399898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.399905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.399909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.399913 | orchestrator | 2026-03-16 00:54:59.399917 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-03-16 00:54:59.399921 | orchestrator | Monday 16 March 2026 00:50:10 +0000 (0:00:03.415) 0:01:38.834 ********** 2026-03-16 00:54:59.399925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-16 00:54:59.399932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.399939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.399946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.399950 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.399954 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-16 00:54:59.399958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.399962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.399968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.399972 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.399978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-16 00:54:59.400012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.400017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.400021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.400025 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.400029 | orchestrator | 2026-03-16 00:54:59.400033 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-03-16 00:54:59.400037 | orchestrator | Monday 16 March 2026 00:50:11 +0000 (0:00:00.925) 0:01:39.759 ********** 2026-03-16 00:54:59.400041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-16 00:54:59.400047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-16 00:54:59.400051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-16 00:54:59.400072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-16 00:54:59.400080 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.400087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-16 00:54:59.400090 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.400094 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-16 00:54:59.400098 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.400102 | orchestrator | 2026-03-16 00:54:59.400106 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-03-16 00:54:59.400110 | orchestrator | Monday 16 March 2026 00:50:12 +0000 (0:00:01.152) 0:01:40.912 ********** 2026-03-16 00:54:59.400114 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:54:59.400118 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:54:59.400121 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:54:59.400125 | orchestrator | 2026-03-16 00:54:59.400129 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-03-16 00:54:59.400133 | orchestrator | Monday 16 March 2026 00:50:14 +0000 (0:00:01.440) 0:01:42.353 ********** 2026-03-16 00:54:59.400137 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:54:59.400140 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:54:59.400144 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:54:59.400148 | orchestrator | 2026-03-16 00:54:59.400152 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-03-16 00:54:59.400186 | orchestrator | Monday 16 March 2026 00:50:16 +0000 (0:00:02.414) 0:01:44.768 ********** 2026-03-16 00:54:59.400191 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.400198 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.400204 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.400210 | orchestrator | 2026-03-16 00:54:59.400216 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-03-16 00:54:59.400222 | 
orchestrator | Monday 16 March 2026 00:50:17 +0000 (0:00:00.633) 0:01:45.401 ********** 2026-03-16 00:54:59.400245 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.400252 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.400258 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.400263 | orchestrator | 2026-03-16 00:54:59.400269 | orchestrator | TASK [include_role : designate] ************************************************ 2026-03-16 00:54:59.400276 | orchestrator | Monday 16 March 2026 00:50:17 +0000 (0:00:00.409) 0:01:45.811 ********** 2026-03-16 00:54:59.400282 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:54:59.400288 | orchestrator | 2026-03-16 00:54:59.400294 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-03-16 00:54:59.400300 | orchestrator | Monday 16 March 2026 00:50:18 +0000 (0:00:01.167) 0:01:46.978 ********** 2026-03-16 00:54:59.400307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-16 00:54:59.400324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-16 00:54:59.400331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.400341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.400348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.400355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.400362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.400369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-16 00:54:59.400383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-16 00:54:59.400391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.400398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.400404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.400410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.400418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-16 00:54:59.400433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.400443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-16 00:54:59.400451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.400458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.400543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.400561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.400570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.400575 | orchestrator | 2026-03-16 00:54:59.400580 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-03-16 00:54:59.400584 | orchestrator | Monday 16 March 2026 00:50:23 +0000 (0:00:04.753) 0:01:51.732 ********** 2026-03-16 00:54:59.400589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-16 00:54:59.400593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-16 00:54:59.400597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.400601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.400609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-16 00:54:59.400616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-16 00:54:59.400635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-16 00:54:59.400640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.400644 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.400651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.400655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-16 00:54:59.400662 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.400669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.400673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.400677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 
'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.400681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.400688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.400700 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.400704 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.400711 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.400715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.400721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.400725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 
'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.400729 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.400734 | orchestrator | 2026-03-16 00:54:59.400737 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-03-16 00:54:59.400741 | orchestrator | Monday 16 March 2026 00:50:24 +0000 (0:00:00.895) 0:01:52.627 ********** 2026-03-16 00:54:59.400761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-16 00:54:59.400769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-16 00:54:59.400773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-16 00:54:59.400794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-16 00:54:59.400798 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.400802 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.400806 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-16 00:54:59.400810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-16 00:54:59.400813 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.400817 | orchestrator | 2026-03-16 00:54:59.400821 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-03-16 00:54:59.400837 | orchestrator | Monday 16 March 2026 00:50:25 +0000 (0:00:00.949) 0:01:53.577 ********** 2026-03-16 00:54:59.400841 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:54:59.400844 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:54:59.400848 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:54:59.400868 | orchestrator | 2026-03-16 00:54:59.400873 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-03-16 00:54:59.400877 | orchestrator | Monday 16 March 2026 00:50:27 +0000 (0:00:01.687) 0:01:55.264 ********** 2026-03-16 00:54:59.400880 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:54:59.400884 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:54:59.400888 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:54:59.400892 | orchestrator | 2026-03-16 00:54:59.400896 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-03-16 00:54:59.400899 | orchestrator | Monday 16 March 2026 00:50:29 +0000 (0:00:02.297) 0:01:57.562 ********** 2026-03-16 00:54:59.400906 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.400910 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.400914 | orchestrator | skipping: [testbed-node-2] 2026-03-16 
00:54:59.400918 | orchestrator | 2026-03-16 00:54:59.400921 | orchestrator | TASK [include_role : glance] *************************************************** 2026-03-16 00:54:59.400925 | orchestrator | Monday 16 March 2026 00:50:29 +0000 (0:00:00.440) 0:01:58.003 ********** 2026-03-16 00:54:59.400929 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:54:59.400933 | orchestrator | 2026-03-16 00:54:59.400937 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-03-16 00:54:59.400941 | orchestrator | Monday 16 March 2026 00:50:30 +0000 (0:00:00.703) 0:01:58.707 ********** 2026-03-16 00:54:59.400949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-16 00:54:59.400958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 
2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-16 00:54:59.401342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-16 00:54:59.401370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 
'tls_backend': 'yes'}}}})  2026-03-16 00:54:59.401403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-16 00:54:59.401411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-16 00:54:59.401416 | orchestrator | 2026-03-16 00:54:59.401420 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-03-16 00:54:59.401424 | orchestrator | Monday 16 March 2026 00:50:35 +0000 (0:00:04.711) 0:02:03.418 
********** 2026-03-16 00:54:59.401437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-16 00:54:59.401445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-16 00:54:59.401450 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.401456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-16 00:54:59.401463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-16 00:54:59.401471 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.401475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-16 00:54:59.401499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': 
{'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-16 00:54:59.401516 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.401520 | orchestrator | 2026-03-16 00:54:59.401524 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-03-16 00:54:59.401533 | orchestrator | Monday 16 March 2026 00:50:38 +0000 (0:00:03.249) 0:02:06.668 ********** 2026-03-16 00:54:59.401537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-16 00:54:59.401542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-16 00:54:59.401546 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.401550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-16 00:54:59.401569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-16 00:54:59.401574 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.401578 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-16 00:54:59.401588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-16 00:54:59.401592 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.401600 | orchestrator | 2026-03-16 00:54:59.401604 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-03-16 00:54:59.401609 | orchestrator | Monday 16 March 2026 00:50:43 +0000 (0:00:05.099) 0:02:11.767 ********** 2026-03-16 00:54:59.401615 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:54:59.401621 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:54:59.401627 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:54:59.401632 | orchestrator | 2026-03-16 00:54:59.401638 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-03-16 00:54:59.401644 | orchestrator | Monday 16 March 2026 00:50:44 +0000 (0:00:01.325) 0:02:13.093 ********** 2026-03-16 00:54:59.401650 | orchestrator | changed: [testbed-node-0] 
2026-03-16 00:54:59.401656 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:54:59.401662 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:54:59.401667 | orchestrator | 2026-03-16 00:54:59.401673 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-03-16 00:54:59.401679 | orchestrator | Monday 16 March 2026 00:50:47 +0000 (0:00:02.296) 0:02:15.389 ********** 2026-03-16 00:54:59.401691 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.401697 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.401703 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.401718 | orchestrator | 2026-03-16 00:54:59.401724 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-03-16 00:54:59.401728 | orchestrator | Monday 16 March 2026 00:50:47 +0000 (0:00:00.716) 0:02:16.106 ********** 2026-03-16 00:54:59.401731 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:54:59.401735 | orchestrator | 2026-03-16 00:54:59.401739 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-03-16 00:54:59.401743 | orchestrator | Monday 16 March 2026 00:50:48 +0000 (0:00:01.013) 0:02:17.119 ********** 2026-03-16 00:54:59.401777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '3000', 'listen_port': '3000'}}}}) 2026-03-16 00:54:59.401782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-16 00:54:59.401796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-16 00:54:59.401801 | orchestrator | 2026-03-16 00:54:59.401815 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-03-16 00:54:59.401819 | orchestrator | Monday 16 March 2026 00:50:52 +0000 (0:00:03.973) 0:02:21.092 ********** 2026-03-16 00:54:59.401826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-16 00:54:59.401830 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.401834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-16 00:54:59.401838 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.401842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-16 00:54:59.401847 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.401850 | orchestrator | 2026-03-16 00:54:59.401854 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-03-16 00:54:59.401858 | orchestrator | Monday 16 March 2026 00:50:53 +0000 (0:00:00.918) 0:02:22.011 ********** 2026-03-16 00:54:59.401862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-16 00:54:59.401871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-16 00:54:59.401875 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.401879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-16 00:54:59.401883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-16 00:54:59.401887 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.401890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-16 00:54:59.401897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'3000', 'listen_port': '3000'}})  2026-03-16 00:54:59.401901 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.401905 | orchestrator | 2026-03-16 00:54:59.401908 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-03-16 00:54:59.401912 | orchestrator | Monday 16 March 2026 00:50:54 +0000 (0:00:00.766) 0:02:22.778 ********** 2026-03-16 00:54:59.401916 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:54:59.401921 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:54:59.401927 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:54:59.401933 | orchestrator | 2026-03-16 00:54:59.401938 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-03-16 00:54:59.401945 | orchestrator | Monday 16 March 2026 00:50:55 +0000 (0:00:01.415) 0:02:24.193 ********** 2026-03-16 00:54:59.401951 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:54:59.401958 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:54:59.401964 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:54:59.401970 | orchestrator | 2026-03-16 00:54:59.401977 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-03-16 00:54:59.401987 | orchestrator | Monday 16 March 2026 00:50:58 +0000 (0:00:02.280) 0:02:26.474 ********** 2026-03-16 00:54:59.401992 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.401996 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.402000 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.402011 | orchestrator | 2026-03-16 00:54:59.402049 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-03-16 00:54:59.402053 | orchestrator | Monday 16 March 2026 00:50:58 +0000 (0:00:00.496) 0:02:26.970 ********** 2026-03-16 00:54:59.402057 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 
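The grafana and horizon items being looped over above bundle a container definition together with its `haproxy` frontend entries, and the role skips or renders each frontend per host. As a rough sketch of that selection logic (the function name, variable names, and simplifications here are illustrative, not kolla-ansible's actual role code), the decision of which frontends get configured can be modeled like this:

```python
# Hypothetical, simplified model of how a service item from the log above
# (e.g. the 'horizon' dict) is filtered down to haproxy frontends to render.
# The dict shape mirrors the logged items; the logic is an assumption based
# on the skipping/changed results visible in the log, not the role's source.

def frontends_to_configure(service, enable_external=True):
    """Return the haproxy frontend names that would be rendered for a service.

    A frontend is kept when the service itself is enabled, the frontend entry
    is enabled (kolla items mix booleans and 'yes'/'no' strings, as seen in
    the grafana item), and external frontends are only kept when an external
    VIP/FQDN is in use.
    """
    def truthy(value):
        return value is True or value == 'yes'

    if not truthy(service.get('enabled')):
        return []
    names = []
    for name, frontend in service.get('haproxy', {}).items():
        # Entries like acme_client carry 'with_frontend': False and are
        # handled specially by the role; model that as a skip here.
        if frontend.get('with_frontend') is False:
            continue
        if not truthy(frontend.get('enabled')):
            continue
        if frontend.get('external') and not enable_external:
            continue
        names.append(name)
    return names


# Trimmed-down version of the horizon item printed in the log.
horizon = {
    'enabled': True,
    'haproxy': {
        'horizon': {'enabled': True, 'mode': 'http', 'external': False,
                    'port': '443', 'listen_port': '80'},
        'horizon_redirect': {'enabled': True, 'mode': 'redirect',
                             'external': False, 'port': '80'},
        'horizon_external': {'enabled': True, 'mode': 'http',
                             'external': True,
                             'external_fqdn': 'api.testbed.osism.xyz',
                             'port': '443'},
        'acme_client': {'enabled': True, 'with_frontend': False},
    },
}

print(frontends_to_configure(horizon))
# → ['horizon', 'horizon_redirect', 'horizon_external']
print(frontends_to_configure(horizon, enable_external=False))
# → ['horizon', 'horizon_redirect']
```

This also explains the pattern visible in the run: the "Copying over ... haproxy config" tasks report `changed` on each controller because enabled frontends exist, while the "single external frontend" and firewall variants report `skipping` for every item on this testbed.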
2026-03-16 00:54:59.402062 | orchestrator | 2026-03-16 00:54:59.402066 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-03-16 00:54:59.402071 | orchestrator | Monday 16 March 2026 00:50:59 +0000 (0:00:00.812) 0:02:27.783 ********** 2026-03-16 00:54:59.402077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-16 00:54:59.402096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-16 00:54:59.402102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-16 00:54:59.402110 | orchestrator | 2026-03-16 00:54:59.402114 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-03-16 00:54:59.402119 | orchestrator | Monday 16 March 2026 00:51:03 +0000 (0:00:03.699) 0:02:31.482 ********** 2026-03-16 00:54:59.402132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-16 00:54:59.402142 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.402150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-16 00:54:59.402155 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.402163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': 
False, 'custom_member_list': []}}}})  2026-03-16 00:54:59.402172 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.402177 | orchestrator | 2026-03-16 00:54:59.402181 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-03-16 00:54:59.402186 | orchestrator | Monday 16 March 2026 00:51:04 +0000 (0:00:00.824) 0:02:32.307 ********** 2026-03-16 00:54:59.402191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-16 00:54:59.402198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-16 00:54:59.402203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-16 00:54:59.402209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-16 00:54:59.402217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': 
{'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-16 00:54:59.402222 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.402227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-16 00:54:59.402231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-16 00:54:59.402238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-16 00:54:59.402243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-16 00:54:59.402247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  
2026-03-16 00:54:59.402255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-16 00:54:59.402260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-16 00:54:59.402264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-16 00:54:59.402269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-16 00:54:59.402273 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.402278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-16 00:54:59.402282 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.402286 | orchestrator | 2026-03-16 00:54:59.402291 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-03-16 00:54:59.402295 | orchestrator | Monday 16 March 2026 00:51:05 +0000 (0:00:01.123) 0:02:33.430 ********** 2026-03-16 00:54:59.402299 | orchestrator | changed: [testbed-node-0] 2026-03-16 
00:54:59.402304 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:54:59.402308 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:54:59.402313 | orchestrator | 2026-03-16 00:54:59.402317 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-03-16 00:54:59.402321 | orchestrator | Monday 16 March 2026 00:51:06 +0000 (0:00:01.267) 0:02:34.698 ********** 2026-03-16 00:54:59.402326 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:54:59.402330 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:54:59.402341 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:54:59.402346 | orchestrator | 2026-03-16 00:54:59.402350 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-03-16 00:54:59.402354 | orchestrator | Monday 16 March 2026 00:51:08 +0000 (0:00:02.213) 0:02:36.911 ********** 2026-03-16 00:54:59.402359 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.402363 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.402367 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.402372 | orchestrator | 2026-03-16 00:54:59.402376 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-03-16 00:54:59.402383 | orchestrator | Monday 16 March 2026 00:51:08 +0000 (0:00:00.306) 0:02:37.218 ********** 2026-03-16 00:54:59.402387 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.402392 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.402396 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.402400 | orchestrator | 2026-03-16 00:54:59.402404 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-03-16 00:54:59.402409 | orchestrator | Monday 16 March 2026 00:51:09 +0000 (0:00:00.525) 0:02:37.743 ********** 2026-03-16 00:54:59.402413 | orchestrator | included: keystone for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-16 00:54:59.402420 | orchestrator | 2026-03-16 00:54:59.402425 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-03-16 00:54:59.402429 | orchestrator | Monday 16 March 2026 00:51:10 +0000 (0:00:00.878) 0:02:38.621 ********** 2026-03-16 00:54:59.402436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-16 00:54:59.402442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-16 00:54:59.402447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-16 00:54:59.402452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-16 00:54:59.402466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-16 00:54:59.402478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-16 00:54:59.402484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-16 00:54:59.402488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-16 00:54:59.402493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-16 00:54:59.402497 | orchestrator | 2026-03-16 00:54:59.402502 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-16 00:54:59.402506 | orchestrator | Monday 16 March 2026 00:51:14 +0000 (0:00:03.640) 0:02:42.262 ********** 2026-03-16 00:54:59.402519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-16 00:54:59.402531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-16 00:54:59.402536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-16 00:54:59.402540 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.402545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-16 00:54:59.402550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-03-16 00:54:59.402555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-16 00:54:59.402567 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.402589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-16 00:54:59.402595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-16 00:54:59.402600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-16 00:54:59.402605 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.402609 | orchestrator | 2026-03-16 00:54:59.402614 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-03-16 00:54:59.402618 | orchestrator | Monday 16 March 2026 00:51:14 +0000 (0:00:00.545) 0:02:42.807 ********** 2026-03-16 00:54:59.402623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-16 00:54:59.402629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-16 00:54:59.402633 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.402638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-16 00:54:59.402642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-16 00:54:59.402651 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.402655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-16 00:54:59.402663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-16 00:54:59.402667 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.402672 | orchestrator | 2026-03-16 00:54:59.402676 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-03-16 00:54:59.402680 | orchestrator | Monday 16 March 2026 00:51:15 +0000 (0:00:00.750) 0:02:43.557 ********** 2026-03-16 00:54:59.402684 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:54:59.402689 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:54:59.402693 | orchestrator | 
changed: [testbed-node-2] 2026-03-16 00:54:59.402697 | orchestrator | 2026-03-16 00:54:59.402701 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-03-16 00:54:59.402706 | orchestrator | Monday 16 March 2026 00:51:16 +0000 (0:00:01.348) 0:02:44.906 ********** 2026-03-16 00:54:59.402710 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:54:59.402715 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:54:59.402719 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:54:59.402723 | orchestrator | 2026-03-16 00:54:59.402728 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-03-16 00:54:59.402732 | orchestrator | Monday 16 March 2026 00:51:18 +0000 (0:00:02.090) 0:02:46.997 ********** 2026-03-16 00:54:59.402736 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.402740 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.402766 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.402773 | orchestrator | 2026-03-16 00:54:59.402779 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-03-16 00:54:59.402783 | orchestrator | Monday 16 March 2026 00:51:19 +0000 (0:00:00.485) 0:02:47.482 ********** 2026-03-16 00:54:59.402787 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:54:59.402792 | orchestrator | 2026-03-16 00:54:59.402796 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-03-16 00:54:59.402800 | orchestrator | Monday 16 March 2026 00:51:20 +0000 (0:00:00.931) 0:02:48.414 ********** 2026-03-16 00:54:59.402806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-16 00:54:59.402811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.402821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-16 00:54:59.402835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.402843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-16 00:54:59.402848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.402852 | orchestrator | 2026-03-16 00:54:59.402857 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-03-16 00:54:59.402861 | orchestrator | Monday 16 March 2026 00:51:23 +0000 (0:00:03.280) 0:02:51.695 ********** 2026-03-16 00:54:59.402870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-16 00:54:59.402874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.402879 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.402894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  
2026-03-16 00:54:59.402899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.402903 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.402908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-16 00:54:59.402916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.402920 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.402925 | orchestrator | 2026-03-16 00:54:59.402929 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-03-16 00:54:59.402934 | orchestrator | Monday 16 March 2026 00:51:24 +0000 (0:00:01.170) 0:02:52.865 ********** 2026-03-16 00:54:59.402939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-16 00:54:59.402944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-16 00:54:59.402948 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.402960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-16 00:54:59.403028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-16 00:54:59.403037 | orchestrator | 
skipping: [testbed-node-1] 2026-03-16 00:54:59.403043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-16 00:54:59.403050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-16 00:54:59.403068 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.403075 | orchestrator | 2026-03-16 00:54:59.403080 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-03-16 00:54:59.403085 | orchestrator | Monday 16 March 2026 00:51:25 +0000 (0:00:00.825) 0:02:53.690 ********** 2026-03-16 00:54:59.403089 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:54:59.403094 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:54:59.403098 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:54:59.403102 | orchestrator | 2026-03-16 00:54:59.403107 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-03-16 00:54:59.403111 | orchestrator | Monday 16 March 2026 00:51:26 +0000 (0:00:01.301) 0:02:54.991 ********** 2026-03-16 00:54:59.403116 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:54:59.403120 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:54:59.403124 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:54:59.403128 | orchestrator | 2026-03-16 00:54:59.403133 | orchestrator | TASK [include_role : manila] *************************************************** 2026-03-16 00:54:59.403142 | orchestrator | Monday 16 March 2026 00:51:28 +0000 (0:00:02.226) 0:02:57.218 ********** 2026-03-16 00:54:59.403146 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:54:59.403151 | orchestrator | 
2026-03-16 00:54:59.403155 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-03-16 00:54:59.403159 | orchestrator | Monday 16 March 2026 00:51:30 +0000 (0:00:01.573) 0:02:58.792 ********** 2026-03-16 00:54:59.403183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-16 00:54:59.403192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.403200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 
'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.403220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.403230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-16 00:54:59.403244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.403250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.403257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.403264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-16 00:54:59.403281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.403291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.403302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.403356 | orchestrator | 2026-03-16 00:54:59.403361 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-03-16 00:54:59.403365 | orchestrator | Monday 16 March 2026 00:51:34 +0000 (0:00:03.946) 0:03:02.738 ********** 2026-03-16 00:54:59.403370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 
'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-16 00:54:59.403375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.403383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.403388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.403397 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.403404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-16 00:54:59.403409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.403414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 
'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.403418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.403423 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.403430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-16 00:54:59.403435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.403448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.403453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.403458 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.403462 | orchestrator | 2026-03-16 00:54:59.403467 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-03-16 00:54:59.403471 | orchestrator | Monday 16 March 2026 00:51:35 +0000 (0:00:00.599) 0:03:03.338 ********** 2026-03-16 00:54:59.403475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-16 00:54:59.403480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-16 00:54:59.403484 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.403489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-16 00:54:59.403494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-16 00:54:59.403498 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.403503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-16 00:54:59.403507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8786', 'listen_port': '8786'}})  2026-03-16 00:54:59.403512 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.403516 | orchestrator | 2026-03-16 00:54:59.403520 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-03-16 00:54:59.403525 | orchestrator | Monday 16 March 2026 00:51:36 +0000 (0:00:01.054) 0:03:04.393 ********** 2026-03-16 00:54:59.403529 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:54:59.403533 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:54:59.403538 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:54:59.403546 | orchestrator | 2026-03-16 00:54:59.403550 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-03-16 00:54:59.403555 | orchestrator | Monday 16 March 2026 00:51:37 +0000 (0:00:01.328) 0:03:05.721 ********** 2026-03-16 00:54:59.403562 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:54:59.403567 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:54:59.403571 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:54:59.403575 | orchestrator | 2026-03-16 00:54:59.403580 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-03-16 00:54:59.403584 | orchestrator | Monday 16 March 2026 00:51:39 +0000 (0:00:02.046) 0:03:07.767 ********** 2026-03-16 00:54:59.403589 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:54:59.403593 | orchestrator | 2026-03-16 00:54:59.403598 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-03-16 00:54:59.403602 | orchestrator | Monday 16 March 2026 00:51:40 +0000 (0:00:01.454) 0:03:09.222 ********** 2026-03-16 00:54:59.403607 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-16 00:54:59.403611 | orchestrator | 2026-03-16 00:54:59.403616 | orchestrator | TASK [haproxy-config : Copying over 
mariadb haproxy config] ******************** 2026-03-16 00:54:59.403639 | orchestrator | Monday 16 March 2026 00:51:44 +0000 (0:00:03.150) 0:03:12.373 ********** 2026-03-16 00:54:59.403649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', '']}}}})  2026-03-16 00:54:59.403654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-16 00:54:59.403660 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.403673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' 
server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-16 00:54:59.403770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-16 00:54:59.403779 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.403787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': 
'30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-16 00:54:59.403799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-16 00:54:59.403812 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.403818 | 
orchestrator | 2026-03-16 00:54:59.403825 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-03-16 00:54:59.403832 | orchestrator | Monday 16 March 2026 00:51:46 +0000 (0:00:02.291) 0:03:14.665 ********** 2026-03-16 00:54:59.403845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-16 00:54:59.403852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-16 00:54:59.403859 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.403869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 
2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-16 00:54:59.403882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-16 00:54:59.403887 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.403892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-16 00:54:59.403901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  
2026-03-16 00:54:59.403906 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.403910 | orchestrator | 2026-03-16 00:54:59.403915 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-03-16 00:54:59.403919 | orchestrator | Monday 16 March 2026 00:51:48 +0000 (0:00:02.440) 0:03:17.105 ********** 2026-03-16 00:54:59.403927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-16 00:54:59.403933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-16 00:54:59.403940 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.403945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-16 00:54:59.403970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-16 00:54:59.403976 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.403980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-16 00:54:59.403990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 
testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-16 00:54:59.403997 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.404003 | orchestrator | 2026-03-16 00:54:59.404009 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-03-16 00:54:59.404015 | orchestrator | Monday 16 March 2026 00:51:52 +0000 (0:00:03.168) 0:03:20.274 ********** 2026-03-16 00:54:59.404022 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:54:59.404028 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:54:59.404085 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:54:59.404092 | orchestrator | 2026-03-16 00:54:59.404098 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-03-16 00:54:59.404105 | orchestrator | Monday 16 March 2026 00:51:53 +0000 (0:00:01.784) 0:03:22.059 ********** 2026-03-16 00:54:59.404111 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.404117 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.404123 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.404130 | orchestrator | 2026-03-16 00:54:59.404135 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-03-16 00:54:59.404141 | orchestrator | Monday 16 March 2026 00:51:55 +0000 (0:00:01.538) 0:03:23.597 ********** 2026-03-16 00:54:59.404146 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.404161 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.404168 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.404173 | orchestrator | 2026-03-16 00:54:59.404179 | orchestrator | TASK [include_role : memcached] ************************************************ 
2026-03-16 00:54:59.404185 | orchestrator | Monday 16 March 2026 00:51:55 +0000 (0:00:00.330) 0:03:23.927 ********** 2026-03-16 00:54:59.404191 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:54:59.404198 | orchestrator | 2026-03-16 00:54:59.404205 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-03-16 00:54:59.404211 | orchestrator | Monday 16 March 2026 00:51:57 +0000 (0:00:01.443) 0:03:25.371 ********** 2026-03-16 00:54:59.404224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-16 00:54:59.404233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-16 00:54:59.404246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-16 00:54:59.404251 | orchestrator | 2026-03-16 00:54:59.404256 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-03-16 00:54:59.404260 | orchestrator | Monday 16 March 2026 00:51:58 +0000 (0:00:01.655) 0:03:27.027 ********** 2026-03-16 00:54:59.404265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'active_passive': True}}}})  2026-03-16 00:54:59.404269 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.404283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-16 00:54:59.404288 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.404296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-16 00:54:59.404300 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.404305 | orchestrator | 2026-03-16 00:54:59.404309 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] 
********************* 2026-03-16 00:54:59.404320 | orchestrator | Monday 16 March 2026 00:51:59 +0000 (0:00:00.417) 0:03:27.445 ********** 2026-03-16 00:54:59.404325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-16 00:54:59.404331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-16 00:54:59.404336 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.404340 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.404345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-16 00:54:59.404349 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.404353 | orchestrator | 2026-03-16 00:54:59.404358 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-03-16 00:54:59.404362 | orchestrator | Monday 16 March 2026 00:52:00 +0000 (0:00:00.918) 0:03:28.363 ********** 2026-03-16 00:54:59.404366 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.404371 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.404375 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.404379 | orchestrator | 2026-03-16 00:54:59.404383 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-03-16 
00:54:59.404388 | orchestrator | Monday 16 March 2026 00:52:00 +0000 (0:00:00.548) 0:03:28.911 ********** 2026-03-16 00:54:59.404392 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.404396 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.404401 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.404405 | orchestrator | 2026-03-16 00:54:59.404409 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-03-16 00:54:59.404415 | orchestrator | Monday 16 March 2026 00:52:02 +0000 (0:00:01.362) 0:03:30.273 ********** 2026-03-16 00:54:59.404422 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.404498 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.404505 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.404509 | orchestrator | 2026-03-16 00:54:59.404513 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-03-16 00:54:59.404518 | orchestrator | Monday 16 March 2026 00:52:02 +0000 (0:00:00.325) 0:03:30.599 ********** 2026-03-16 00:54:59.404522 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:54:59.404527 | orchestrator | 2026-03-16 00:54:59.404531 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-03-16 00:54:59.404535 | orchestrator | Monday 16 March 2026 00:52:03 +0000 (0:00:01.516) 0:03:32.115 ********** 2026-03-16 00:54:59.404580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-16 00:54:59.404601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.404609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.404632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.404637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-16 00:54:59.404650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.404657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-16 00:54:59.404685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-16 00:54:59.404690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.404694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-16 00:54:59.404698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.404703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-16 00:54:59.404711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-16 00:54:59.404720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.404727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-16 00:54:59.404732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-16 00:54:59.404736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-16 00:54:59.404783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.404800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.404809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 
'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.404816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-16 00:54:59.404822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.404829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-16 00:54:59.404835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-16 00:54:59.404851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.404862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-16 00:54:59.404866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-16 00:54:59.404870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.404874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.404885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.404891 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-16 00:54:59.404898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.404904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-16 00:54:59.404910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 
'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-16 00:54:59.404917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.404931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.404938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-16 00:54:59.404947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-16 00:54:59.404951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.404955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-16 00:54:59.404959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.404968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-16 00:54:59.404985 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-16 00:54:59.404996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.405003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-16 00:54:59.405010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-16 00:54:59.405018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-16 00:54:59.405039 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-16 00:54:59.405045 | orchestrator | 2026-03-16 00:54:59.405052 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-03-16 00:54:59.405058 | orchestrator | Monday 16 March 2026 00:52:08 +0000 (0:00:04.451) 0:03:36.567 ********** 2026-03-16 00:54:59.405068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-16 00:54:59.405074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': 
{'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.405081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.405088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.405109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-16 00:54:59.405120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-16 00:54:59.405127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.405133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.405140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.405152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-16 00:54:59.405169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.405181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-16 00:54:59.405185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-16 00:54:59.405189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.405193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.405200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-16 00:54:59.405213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-16 00:54:59.405218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.405224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-16 00:54:59.405228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-16 00:54:59.405232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.405240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-16 00:54:59.405246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-16 00:54:59.405261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.405271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.405278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-16 00:54:59.405285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-16 00:54:59.405298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-16 00:54:59.405314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.405325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-16 00:54:59.405383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-16 00:54:59.405390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.405401 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.405406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.405410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.405476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-16 00:54:59.405482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-16 00:54:59.405486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-16 00:54:59.405497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.405504 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.405577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-16 00:54:59.405597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-16 00:54:59.405616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.405629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-16 00:54:59.405635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.405648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-16 00:54:59.405655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}}})  2026-03-16 00:54:59.405662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.405678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-16 00:54:59.405689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-16 00:54:59.405696 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.405702 | orchestrator | 2026-03-16 00:54:59.405708 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-03-16 00:54:59.405719 | orchestrator | Monday 16 March 2026 00:52:09 +0000 (0:00:01.524) 0:03:38.091 ********** 2026-03-16 00:54:59.405728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-16 00:54:59.405736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-16 00:54:59.405796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-16 00:54:59.405805 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.405812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-16 00:54:59.405819 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.405826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-16 00:54:59.405833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-16 00:54:59.405840 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.405846 | orchestrator | 2026-03-16 00:54:59.405853 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-03-16 00:54:59.405859 | orchestrator | Monday 16 March 2026 00:52:11 +0000 (0:00:02.113) 0:03:40.205 ********** 2026-03-16 00:54:59.405866 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:54:59.405871 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:54:59.405875 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:54:59.405879 | orchestrator | 2026-03-16 00:54:59.405883 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-03-16 00:54:59.405886 | orchestrator | Monday 16 March 2026 00:52:13 +0000 (0:00:01.375) 0:03:41.581 ********** 2026-03-16 00:54:59.405890 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:54:59.405894 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:54:59.405897 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:54:59.405901 | orchestrator | 2026-03-16 00:54:59.405905 | orchestrator | TASK [include_role : placement] ************************************************ 2026-03-16 00:54:59.405908 | orchestrator | Monday 16 March 2026 00:52:15 +0000 (0:00:02.331) 0:03:43.912 ********** 2026-03-16 00:54:59.405912 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:54:59.405916 | orchestrator | 2026-03-16 00:54:59.405920 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-03-16 
00:54:59.405923 | orchestrator | Monday 16 March 2026 00:52:16 +0000 (0:00:01.282) 0:03:45.194 ********** 2026-03-16 00:54:59.405939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-16 00:54:59.405950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 
'no'}}}}) 2026-03-16 00:54:59.405959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-16 00:54:59.405963 | orchestrator | 2026-03-16 00:54:59.405967 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-03-16 00:54:59.405971 | orchestrator | Monday 16 March 2026 00:52:20 +0000 (0:00:03.973) 0:03:49.168 ********** 2026-03-16 00:54:59.405975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-16 00:54:59.405979 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.405992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-16 00:54:59.405996 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.406003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-16 00:54:59.406011 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.406053 | orchestrator | 2026-03-16 00:54:59.406061 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-03-16 00:54:59.406067 | orchestrator | Monday 16 March 2026 00:52:21 +0000 (0:00:00.547) 0:03:49.715 ********** 2026-03-16 00:54:59.406073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-16 00:54:59.406080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-16 00:54:59.406087 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.406093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-16 00:54:59.406098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-16 00:54:59.406105 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.406111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}})  2026-03-16 00:54:59.406117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-16 00:54:59.406123 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.406129 | orchestrator | 2026-03-16 00:54:59.406135 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-03-16 00:54:59.406141 | orchestrator | Monday 16 March 2026 00:52:22 +0000 (0:00:00.784) 0:03:50.500 ********** 2026-03-16 00:54:59.406147 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:54:59.406153 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:54:59.406160 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:54:59.406168 | orchestrator | 2026-03-16 00:54:59.406172 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-03-16 00:54:59.406176 | orchestrator | Monday 16 March 2026 00:52:24 +0000 (0:00:02.042) 0:03:52.542 ********** 2026-03-16 00:54:59.406180 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:54:59.406184 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:54:59.406188 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:54:59.406192 | orchestrator | 2026-03-16 00:54:59.406195 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-03-16 00:54:59.406199 | orchestrator | Monday 16 March 2026 00:52:26 +0000 (0:00:02.037) 0:03:54.579 ********** 2026-03-16 00:54:59.406203 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:54:59.406207 | orchestrator | 2026-03-16 00:54:59.406211 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-03-16 00:54:59.406221 | orchestrator | Monday 16 March 2026 
00:52:28 +0000 (0:00:01.737) 0:03:56.317 ********** 2026-03-16 00:54:59.406244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-16 00:54:59.406255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.406260 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-16 00:54:59.406264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.406269 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.406282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.406290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 
'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-16 00:54:59.406294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.406298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.406303 | orchestrator | 2026-03-16 00:54:59.406307 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-03-16 00:54:59.406311 | 
orchestrator | Monday 16 March 2026 00:52:32 +0000 (0:00:04.650) 0:04:00.967 ********** 2026-03-16 00:54:59.406326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-16 00:54:59.406335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': 
'30'}}})  2026-03-16 00:54:59.406344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.406348 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.406352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}})  2026-03-16 00:54:59.406356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.406363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.406367 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.406383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-16 00:54:59.406388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.406392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.406396 | 
orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.406400 | orchestrator | 2026-03-16 00:54:59.406404 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-03-16 00:54:59.406407 | orchestrator | Monday 16 March 2026 00:52:34 +0000 (0:00:01.393) 0:04:02.361 ********** 2026-03-16 00:54:59.406411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-16 00:54:59.406420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-16 00:54:59.406424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-16 00:54:59.406428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-16 00:54:59.406432 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.406437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-16 00:54:59.406444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-16 00:54:59.406450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-16 00:54:59.406469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-16 00:54:59.406478 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.406485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-16 00:54:59.406491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-16 00:54:59.406501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-16 00:54:59.406507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-16 00:54:59.406513 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.406520 | orchestrator | 2026-03-16 00:54:59.406526 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-03-16 00:54:59.406533 | orchestrator | Monday 16 March 2026 00:52:35 +0000 (0:00:00.937) 0:04:03.298 ********** 2026-03-16 00:54:59.406539 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:54:59.406546 | orchestrator | changed: [testbed-node-1] 
2026-03-16 00:54:59.406552 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:54:59.406559 | orchestrator | 2026-03-16 00:54:59.406566 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-03-16 00:54:59.406572 | orchestrator | Monday 16 March 2026 00:52:36 +0000 (0:00:01.496) 0:04:04.795 ********** 2026-03-16 00:54:59.406579 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:54:59.406583 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:54:59.406587 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:54:59.406591 | orchestrator | 2026-03-16 00:54:59.406595 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-03-16 00:54:59.406598 | orchestrator | Monday 16 March 2026 00:52:38 +0000 (0:00:02.328) 0:04:07.123 ********** 2026-03-16 00:54:59.406607 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:54:59.406611 | orchestrator | 2026-03-16 00:54:59.406614 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-03-16 00:54:59.406618 | orchestrator | Monday 16 March 2026 00:52:40 +0000 (0:00:01.748) 0:04:08.872 ********** 2026-03-16 00:54:59.406622 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-03-16 00:54:59.406627 | orchestrator | 2026-03-16 00:54:59.406631 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-03-16 00:54:59.406635 | orchestrator | Monday 16 March 2026 00:52:41 +0000 (0:00:00.889) 0:04:09.762 ********** 2026-03-16 00:54:59.406639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 
'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-16 00:54:59.406644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-16 00:54:59.406648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-16 00:54:59.406652 | orchestrator | 2026-03-16 00:54:59.406665 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-03-16 00:54:59.406670 | orchestrator | Monday 16 March 2026 00:52:46 +0000 (0:00:05.031) 0:04:14.793 ********** 2026-03-16 00:54:59.406674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 
1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-16 00:54:59.406678 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.406685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-16 00:54:59.406689 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.406693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-16 00:54:59.406700 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.406704 | orchestrator | 2026-03-16 00:54:59.406708 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-03-16 00:54:59.406712 | orchestrator | Monday 16 March 2026 00:52:47 +0000 (0:00:01.135) 0:04:15.928 ********** 2026-03-16 00:54:59.406716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 
'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-16 00:54:59.406720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-16 00:54:59.406724 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.406728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-16 00:54:59.406732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-16 00:54:59.406736 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.406740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-16 00:54:59.406762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-16 00:54:59.406769 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.406775 | orchestrator | 2026-03-16 00:54:59.406780 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-16 00:54:59.406784 | orchestrator | Monday 16 March 2026 00:52:49 +0000 (0:00:01.608) 0:04:17.537 ********** 
2026-03-16 00:54:59.406788 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:54:59.406792 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:54:59.406795 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:54:59.406799 | orchestrator | 2026-03-16 00:54:59.406803 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-16 00:54:59.406807 | orchestrator | Monday 16 March 2026 00:52:52 +0000 (0:00:02.753) 0:04:20.290 ********** 2026-03-16 00:54:59.406811 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:54:59.406814 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:54:59.406818 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:54:59.406822 | orchestrator | 2026-03-16 00:54:59.406834 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-03-16 00:54:59.406839 | orchestrator | Monday 16 March 2026 00:52:55 +0000 (0:00:03.356) 0:04:23.647 ********** 2026-03-16 00:54:59.406843 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-03-16 00:54:59.406847 | orchestrator | 2026-03-16 00:54:59.406851 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-03-16 00:54:59.406859 | orchestrator | Monday 16 March 2026 00:52:56 +0000 (0:00:01.536) 0:04:25.183 ********** 2026-03-16 00:54:59.406866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 
'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-16 00:54:59.406870 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.406874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-16 00:54:59.406878 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.406882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-16 00:54:59.406886 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.406890 | orchestrator | 2026-03-16 00:54:59.406894 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-03-16 00:54:59.406900 | orchestrator | Monday 16 March 2026 00:52:58 +0000 (0:00:01.331) 0:04:26.515 ********** 2026-03-16 00:54:59.406906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 
'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-16 00:54:59.406911 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.406921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-16 00:54:59.406928 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.406934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-16 00:54:59.406945 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.406951 | orchestrator | 2026-03-16 00:54:59.406967 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-03-16 00:54:59.406973 | orchestrator | Monday 16 March 2026 00:52:59 +0000 (0:00:01.484) 0:04:28.000 ********** 2026-03-16 00:54:59.406979 | orchestrator | skipping: [testbed-node-0] 2026-03-16 
00:54:59.406986 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.406991 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.406997 | orchestrator | 2026-03-16 00:54:59.407003 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-16 00:54:59.407009 | orchestrator | Monday 16 March 2026 00:53:01 +0000 (0:00:01.964) 0:04:29.964 ********** 2026-03-16 00:54:59.407016 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:54:59.407022 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:54:59.407028 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:54:59.407034 | orchestrator | 2026-03-16 00:54:59.407040 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-16 00:54:59.407046 | orchestrator | Monday 16 March 2026 00:53:04 +0000 (0:00:02.499) 0:04:32.464 ********** 2026-03-16 00:54:59.407050 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:54:59.407054 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:54:59.407057 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:54:59.407061 | orchestrator | 2026-03-16 00:54:59.407065 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-03-16 00:54:59.407074 | orchestrator | Monday 16 March 2026 00:53:07 +0000 (0:00:03.191) 0:04:35.656 ********** 2026-03-16 00:54:59.407079 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-03-16 00:54:59.407085 | orchestrator | 2026-03-16 00:54:59.407091 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-03-16 00:54:59.407097 | orchestrator | Monday 16 March 2026 00:53:08 +0000 (0:00:00.885) 0:04:36.541 ********** 2026-03-16 00:54:59.407104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 
'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-16 00:54:59.407110 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.407116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-16 00:54:59.407122 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.407128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-16 00:54:59.407134 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.407140 | orchestrator | 2026-03-16 00:54:59.407146 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-03-16 
00:54:59.407157 | orchestrator | Monday 16 March 2026 00:53:09 +0000 (0:00:01.295) 0:04:37.837 ********** 2026-03-16 00:54:59.407164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-16 00:54:59.407170 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.407188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-16 00:54:59.407196 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.407202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-16 00:54:59.407209 | orchestrator | 
skipping: [testbed-node-2] 2026-03-16 00:54:59.407215 | orchestrator | 2026-03-16 00:54:59.407222 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-03-16 00:54:59.407232 | orchestrator | Monday 16 March 2026 00:53:10 +0000 (0:00:01.248) 0:04:39.085 ********** 2026-03-16 00:54:59.407239 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.407244 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.407250 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.407256 | orchestrator | 2026-03-16 00:54:59.407263 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-16 00:54:59.407269 | orchestrator | Monday 16 March 2026 00:53:12 +0000 (0:00:01.456) 0:04:40.542 ********** 2026-03-16 00:54:59.407275 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:54:59.407281 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:54:59.407288 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:54:59.407294 | orchestrator | 2026-03-16 00:54:59.407300 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-16 00:54:59.407306 | orchestrator | Monday 16 March 2026 00:53:14 +0000 (0:00:02.321) 0:04:42.864 ********** 2026-03-16 00:54:59.407312 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:54:59.407318 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:54:59.407324 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:54:59.407329 | orchestrator | 2026-03-16 00:54:59.407336 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-03-16 00:54:59.407340 | orchestrator | Monday 16 March 2026 00:53:17 +0000 (0:00:03.079) 0:04:45.943 ********** 2026-03-16 00:54:59.407343 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:54:59.407347 | orchestrator | 2026-03-16 00:54:59.407352 | orchestrator | TASK [haproxy-config : 
Copying over octavia haproxy config] ******************** 2026-03-16 00:54:59.407358 | orchestrator | Monday 16 March 2026 00:53:19 +0000 (0:00:01.701) 0:04:47.645 ********** 2026-03-16 00:54:59.407365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-16 00:54:59.407379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-16 00:54:59.407386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-16 00:54:59.407401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-16 00:54:59.407412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.407417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-16 00:54:59.407425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-16 00:54:59.407429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-16 00:54:59.407441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-16 00:54:59.407445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-16 00:54:59.407452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-16 00:54:59.407456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-16 00:54:59.407465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-16 00:54:59.407468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.407473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.407477 | orchestrator | 2026-03-16 00:54:59.407481 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-03-16 00:54:59.407485 | orchestrator | Monday 16 March 2026 00:53:22 +0000 (0:00:03.426) 0:04:51.072 ********** 2026-03-16 00:54:59.407496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-16 00:54:59.407504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-16 00:54:59.407508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-16 00:54:59.407515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-16 00:54:59.407519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.407523 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.407534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-16 00:54:59.407538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-16 00:54:59.407544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-16 00:54:59.407549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-16 00:54:59.407555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.407559 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.407564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-16 00:54:59.407568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-16 00:54:59.407579 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-16 00:54:59.407586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-16 00:54:59.407593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-16 00:54:59.407597 | orchestrator | skipping: [testbed-node-2] 
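The alternating "changed" / "skipping" results in the haproxy-config loops above track whether each service item actually carries an enabled haproxy frontend. As a rough illustration only — the helper names and truthiness handling below are hypothetical simplifications, not kolla-ansible's actual implementation — the filtering behaves like this:

```python
# Hypothetical sketch of the skip/changed pattern seen in the log:
# a loop item produces haproxy config only when the service is enabled
# AND it has at least one enabled haproxy frontend entry. Services like
# octavia-worker (no 'haproxy' key) are therefore always skipped.

def is_enabled(value):
    """Normalize the mixed bool/'yes' flags visible in the log items."""
    return value in (True, "yes", "true", "True")

def frontends_to_render(services):
    """Yield (service, frontend) pairs that would emit haproxy config."""
    for name, svc in services.items():
        if not is_enabled(svc.get("enabled")):
            continue  # whole service disabled -> every loop item skipped
        for fe_name, fe in svc.get("haproxy", {}).items():
            if is_enabled(fe.get("enabled")):
                yield name, fe_name

# Trimmed-down versions of two items from the log above.
services = {
    "octavia-api": {
        "enabled": True,
        "haproxy": {
            "octavia_api": {"enabled": "yes", "port": "9876"},
            "octavia_api_external": {"enabled": "yes", "port": "9876"},
        },
    },
    "octavia-worker": {"enabled": True},  # no haproxy section -> skipped
}

print(list(frontends_to_render(services)))
```

Under this reading, `octavia-api` yields both its internal and external frontends (the "changed" items per node), while `octavia-worker`, `octavia-housekeeping`, and the other frontend-less services appear only as "skipping" lines.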
2026-03-16 00:54:59.407601 | orchestrator | 2026-03-16 00:54:59.407607 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-03-16 00:54:59.407613 | orchestrator | Monday 16 March 2026 00:53:23 +0000 (0:00:00.660) 0:04:51.732 ********** 2026-03-16 00:54:59.407619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-16 00:54:59.407626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-16 00:54:59.407632 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.407639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-16 00:54:59.407645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-16 00:54:59.407651 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.407657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-16 00:54:59.407662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-16 00:54:59.407668 | orchestrator | 
skipping: [testbed-node-2] 2026-03-16 00:54:59.407674 | orchestrator | 2026-03-16 00:54:59.407680 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-03-16 00:54:59.407687 | orchestrator | Monday 16 March 2026 00:53:24 +0000 (0:00:01.349) 0:04:53.082 ********** 2026-03-16 00:54:59.407692 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:54:59.407698 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:54:59.407703 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:54:59.407709 | orchestrator | 2026-03-16 00:54:59.407715 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-03-16 00:54:59.407720 | orchestrator | Monday 16 March 2026 00:53:26 +0000 (0:00:01.257) 0:04:54.339 ********** 2026-03-16 00:54:59.407726 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:54:59.407732 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:54:59.407738 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:54:59.407743 | orchestrator | 2026-03-16 00:54:59.407765 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-03-16 00:54:59.407782 | orchestrator | Monday 16 March 2026 00:53:28 +0000 (0:00:01.992) 0:04:56.332 ********** 2026-03-16 00:54:59.407788 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:54:59.407793 | orchestrator | 2026-03-16 00:54:59.407799 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-03-16 00:54:59.407812 | orchestrator | Monday 16 March 2026 00:53:29 +0000 (0:00:01.468) 0:04:57.801 ********** 2026-03-16 00:54:59.407822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-16 00:54:59.407829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-16 00:54:59.407835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-16 00:54:59.407842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-16 00:54:59.407863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-16 00:54:59.407876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-16 00:54:59.407882 | orchestrator | 2026-03-16 00:54:59.407888 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-03-16 00:54:59.407894 | orchestrator | Monday 16 March 2026 00:53:35 +0000 
(0:00:05.476) 0:05:03.277 ********** 2026-03-16 00:54:59.407900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-16 00:54:59.407906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 
'auth_pass': 'password'}}}})  2026-03-16 00:54:59.407925 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.407931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-16 00:54:59.407944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-16 00:54:59.407951 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.407956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-16 00:54:59.407963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-16 00:54:59.407976 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.407982 | orchestrator | 2026-03-16 00:54:59.407988 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-03-16 00:54:59.407994 | orchestrator | Monday 16 March 2026 00:53:35 +0000 (0:00:00.587) 0:05:03.865 ********** 2026-03-16 00:54:59.408009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-16 00:54:59.408017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-16 00:54:59.408023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-16 00:54:59.408030 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.408039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-16 00:54:59.408045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-16 00:54:59.408052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-16 00:54:59.408058 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.408064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-16 00:54:59.408075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-16 00:54:59.408082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-16 00:54:59.408088 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.408094 | orchestrator | 2026-03-16 00:54:59.408100 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-03-16 00:54:59.408109 | orchestrator | Monday 16 March 2026 00:53:36 +0000 (0:00:00.897) 0:05:04.762 ********** 2026-03-16 00:54:59.408116 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.408123 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.408130 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.408137 | orchestrator | 2026-03-16 00:54:59.408144 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-03-16 00:54:59.408150 | orchestrator | Monday 16 March 2026 00:53:37 +0000 (0:00:00.702) 0:05:05.465 ********** 2026-03-16 00:54:59.408157 | orchestrator | skipping: [testbed-node-0] 2026-03-16 
00:54:59.408163 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.408169 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.408175 | orchestrator | 2026-03-16 00:54:59.408180 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-03-16 00:54:59.408186 | orchestrator | Monday 16 March 2026 00:53:38 +0000 (0:00:01.486) 0:05:06.952 ********** 2026-03-16 00:54:59.408197 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:54:59.408202 | orchestrator | 2026-03-16 00:54:59.408212 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-03-16 00:54:59.408221 | orchestrator | Monday 16 March 2026 00:53:40 +0000 (0:00:01.478) 0:05:08.430 ********** 2026-03-16 00:54:59.408230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-16 00:54:59.408252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-16 00:54:59 | INFO  | Task 9e0e5ef7-0dd0-410b-8b6c-a3414979e90c is in state STARTED 2026-03-16 00:54:59 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:54:59.408277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-16 00:54:59.408284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:54:59.408291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter',
'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-16 00:54:59.408297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:54:59.408312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:54:59.408319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-16 00:54:59.408337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:54:59.408347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-16 00:54:59.408354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-16 00:54:59.408360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-16 00:54:59.408372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:54:59.408378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:54:59.408383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-16 00:54:59.408396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-16 00:54:59.408405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-16 00:54:59.408410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:54:59.408418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:54:59.408422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-16 00:54:59.408433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-16 00:54:59.408443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  
2026-03-16 00:54:59.408451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:54:59.408462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:54:59.408476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-16 00:54:59.408484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-16 00:54:59.408498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-16 00:54:59.408508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:54:59.408514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:54:59.408521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-16 00:54:59.408533 | orchestrator | 2026-03-16 00:54:59.408540 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-03-16 00:54:59.408546 | orchestrator | Monday 16 March 2026 00:53:44 +0000 (0:00:04.679) 0:05:13.110 ********** 2026-03-16 00:54:59.408552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-16 00:54:59.408559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-16 00:54:59.408566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:54:59.408578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:54:59.408588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-16 00:54:59.408595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-16 00:54:59.408614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': 
{'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-16 00:54:59.408621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:54:59.408628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:54:59.408638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-16 00:54:59.408645 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.408652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-16 00:54:59.408659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-16 00:54:59.408670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:54:59.408676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:54:59.408683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-16 00:54:59.408693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': 
{}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-16 00:54:59.408818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-16 00:54:59.408842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 
00:54:59.408855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:54:59.408862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-16 00:54:59.408871 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.408878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-16 00:54:59.408884 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-16 00:54:59.408897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:54:59.408904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:54:59.408914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-16 00:54:59.408924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-16 00:54:59.408931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-16 00:54:59.408938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:54:59.408948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 00:54:59.408956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-16 00:54:59.408962 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.408968 | orchestrator | 2026-03-16 00:54:59.408980 | 
orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-03-16 00:54:59.408990 | orchestrator | Monday 16 March 2026 00:53:46 +0000 (0:00:01.508) 0:05:14.618 ********** 2026-03-16 00:54:59.408997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-16 00:54:59.409004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-16 00:54:59.409011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-16 00:54:59.409018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-16 00:54:59.409024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-16 00:54:59.409030 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.409036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-16 00:54:59.409043 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-16 00:54:59.409050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-16 00:54:59.409057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-16 00:54:59.409061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-16 00:54:59.409065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-16 00:54:59.409069 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.409078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-16 00:54:59.409084 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.409090 | orchestrator 
| 2026-03-16 00:54:59.409096 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-03-16 00:54:59.409102 | orchestrator | Monday 16 March 2026 00:53:47 +0000 (0:00:01.272) 0:05:15.891 ********** 2026-03-16 00:54:59.409114 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.409120 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.409127 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.409133 | orchestrator | 2026-03-16 00:54:59.409139 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-03-16 00:54:59.409146 | orchestrator | Monday 16 March 2026 00:53:48 +0000 (0:00:00.544) 0:05:16.436 ********** 2026-03-16 00:54:59.409152 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.409158 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.409164 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.409170 | orchestrator | 2026-03-16 00:54:59.409177 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-03-16 00:54:59.409182 | orchestrator | Monday 16 March 2026 00:53:49 +0000 (0:00:01.583) 0:05:18.019 ********** 2026-03-16 00:54:59.409189 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:54:59.409196 | orchestrator | 2026-03-16 00:54:59.409206 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-03-16 00:54:59.409212 | orchestrator | Monday 16 March 2026 00:53:51 +0000 (0:00:01.984) 0:05:20.003 ********** 2026-03-16 00:54:59.409220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-16 00:54:59.409225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-16 00:54:59.409229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': 
None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-16 00:54:59.409238 | orchestrator | 2026-03-16 00:54:59.409242 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-03-16 00:54:59.409250 | orchestrator | Monday 16 March 2026 00:53:54 +0000 (0:00:02.993) 0:05:22.996 ********** 2026-03-16 00:54:59.409257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 
'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-16 00:54:59.409261 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.409265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-16 00:54:59.409269 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.409273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-16 00:54:59.409277 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.409281 | orchestrator | 2026-03-16 00:54:59.409285 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-03-16 00:54:59.409289 | orchestrator | Monday 16 March 2026 00:53:55 +0000 (0:00:00.412) 0:05:23.409 ********** 2026-03-16 00:54:59.409293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-16 00:54:59.409300 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.409304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-16 00:54:59.409308 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.409312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-16 00:54:59.409315 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.409319 | orchestrator | 2026-03-16 00:54:59.409323 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-03-16 00:54:59.409327 | orchestrator | Monday 16 March 2026 00:53:56 +0000 (0:00:01.248) 0:05:24.658 ********** 2026-03-16 00:54:59.409333 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.409337 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.409341 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.409345 | orchestrator | 2026-03-16 00:54:59.409349 | 
orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-03-16 00:54:59.409352 | orchestrator | Monday 16 March 2026 00:53:56 +0000 (0:00:00.487) 0:05:25.146 ********** 2026-03-16 00:54:59.409356 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.409360 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.409364 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.409367 | orchestrator | 2026-03-16 00:54:59.409371 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-03-16 00:54:59.409375 | orchestrator | Monday 16 March 2026 00:53:58 +0000 (0:00:01.557) 0:05:26.703 ********** 2026-03-16 00:54:59.409379 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:54:59.409382 | orchestrator | 2026-03-16 00:54:59.409386 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-03-16 00:54:59.409390 | orchestrator | Monday 16 March 2026 00:54:00 +0000 (0:00:02.072) 0:05:28.776 ********** 2026-03-16 00:54:59.409399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-16 00:54:59.409406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-16 00:54:59.409417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-16 00:54:59.409428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-16 00:54:59.409441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 
2026-03-16 00:54:59.409448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-16 00:54:59.409454 | orchestrator | 2026-03-16 00:54:59.409461 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-03-16 00:54:59.409469 | orchestrator | Monday 16 March 2026 00:54:07 +0000 (0:00:06.679) 0:05:35.456 ********** 2026-03-16 00:54:59.409473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-16 00:54:59.409480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-16 00:54:59.409484 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.409491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-16 00:54:59.409495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-16 00:54:59.409499 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.409506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-16 00:54:59.409513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-16 00:54:59.409517 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.409521 | orchestrator | 2026-03-16 00:54:59.409525 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-03-16 00:54:59.409528 | orchestrator | Monday 16 March 2026 00:54:08 +0000 (0:00:00.821) 0:05:36.277 ********** 2026-03-16 00:54:59.409532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-16 00:54:59.409537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-16 00:54:59.409543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-16 00:54:59.409547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-16 00:54:59.409551 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.409555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-16 00:54:59.409559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-16 00:54:59.409563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-16 00:54:59.409570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-16 00:54:59.409574 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.409577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-16 00:54:59.409581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-16 00:54:59.409585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-16 00:54:59.409589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-16 00:54:59.409593 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.409597 | orchestrator | 2026-03-16 00:54:59.409600 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-03-16 00:54:59.409604 | orchestrator | Monday 16 March 2026 00:54:09 +0000 (0:00:01.944) 0:05:38.221 ********** 2026-03-16 00:54:59.409608 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:54:59.409612 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:54:59.409616 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:54:59.409619 | orchestrator | 2026-03-16 00:54:59.409623 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-03-16 00:54:59.409627 | orchestrator | Monday 16 March 2026 00:54:11 +0000 (0:00:01.206) 0:05:39.428 ********** 2026-03-16 00:54:59.409631 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:54:59.409635 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:54:59.409639 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:54:59.409642 | orchestrator | 2026-03-16 
00:54:59.409646 | orchestrator | TASK [include_role : swift] **************************************************** 2026-03-16 00:54:59.409650 | orchestrator | Monday 16 March 2026 00:54:13 +0000 (0:00:02.214) 0:05:41.643 ********** 2026-03-16 00:54:59.409654 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.409658 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.409662 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.409665 | orchestrator | 2026-03-16 00:54:59.409669 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-03-16 00:54:59.409673 | orchestrator | Monday 16 March 2026 00:54:13 +0000 (0:00:00.347) 0:05:41.990 ********** 2026-03-16 00:54:59.409677 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.409683 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.409687 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.409691 | orchestrator | 2026-03-16 00:54:59.409695 | orchestrator | TASK [include_role : trove] **************************************************** 2026-03-16 00:54:59.409699 | orchestrator | Monday 16 March 2026 00:54:14 +0000 (0:00:00.342) 0:05:42.333 ********** 2026-03-16 00:54:59.409703 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.409706 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.409710 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.409714 | orchestrator | 2026-03-16 00:54:59.409718 | orchestrator | TASK [include_role : venus] **************************************************** 2026-03-16 00:54:59.409722 | orchestrator | Monday 16 March 2026 00:54:14 +0000 (0:00:00.756) 0:05:43.089 ********** 2026-03-16 00:54:59.409725 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.409729 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.409737 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.409741 | orchestrator | 2026-03-16 
00:54:59.409760 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-03-16 00:54:59.409764 | orchestrator | Monday 16 March 2026 00:54:15 +0000 (0:00:00.361) 0:05:43.451 ********** 2026-03-16 00:54:59.409768 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.409772 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.409778 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.409782 | orchestrator | 2026-03-16 00:54:59.409786 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-03-16 00:54:59.409790 | orchestrator | Monday 16 March 2026 00:54:15 +0000 (0:00:00.352) 0:05:43.803 ********** 2026-03-16 00:54:59.409793 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.409797 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.409801 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.409805 | orchestrator | 2026-03-16 00:54:59.409808 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-03-16 00:54:59.409812 | orchestrator | Monday 16 March 2026 00:54:16 +0000 (0:00:00.934) 0:05:44.738 ********** 2026-03-16 00:54:59.409816 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:54:59.409820 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:54:59.409824 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:54:59.409827 | orchestrator | 2026-03-16 00:54:59.409831 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-03-16 00:54:59.409835 | orchestrator | Monday 16 March 2026 00:54:17 +0000 (0:00:00.817) 0:05:45.556 ********** 2026-03-16 00:54:59.409839 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:54:59.409842 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:54:59.409846 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:54:59.409850 | orchestrator | 2026-03-16 00:54:59.409854 | orchestrator | 
RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-03-16 00:54:59.409858 | orchestrator | Monday 16 March 2026 00:54:17 +0000 (0:00:00.373) 0:05:45.929 ********** 2026-03-16 00:54:59.409861 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:54:59.409865 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:54:59.409869 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:54:59.409873 | orchestrator | 2026-03-16 00:54:59.409877 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-03-16 00:54:59.409880 | orchestrator | Monday 16 March 2026 00:54:18 +0000 (0:00:00.974) 0:05:46.903 ********** 2026-03-16 00:54:59.409884 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:54:59.409888 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:54:59.409892 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:54:59.409895 | orchestrator | 2026-03-16 00:54:59.409899 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-03-16 00:54:59.409903 | orchestrator | Monday 16 March 2026 00:54:20 +0000 (0:00:01.431) 0:05:48.334 ********** 2026-03-16 00:54:59.409907 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:54:59.409910 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:54:59.409914 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:54:59.409918 | orchestrator | 2026-03-16 00:54:59.409922 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-03-16 00:54:59.409925 | orchestrator | Monday 16 March 2026 00:54:21 +0000 (0:00:01.033) 0:05:49.368 ********** 2026-03-16 00:54:59.409929 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:54:59.409933 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:54:59.409937 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:54:59.409941 | orchestrator | 2026-03-16 00:54:59.409944 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to 
start] ************** 2026-03-16 00:54:59.409948 | orchestrator | Monday 16 March 2026 00:54:25 +0000 (0:00:04.735) 0:05:54.104 ********** 2026-03-16 00:54:59.409952 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:54:59.409956 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:54:59.409959 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:54:59.409963 | orchestrator | 2026-03-16 00:54:59.409967 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-03-16 00:54:59.409974 | orchestrator | Monday 16 March 2026 00:54:28 +0000 (0:00:02.976) 0:05:57.080 ********** 2026-03-16 00:54:59.409978 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:54:59.409982 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:54:59.409985 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:54:59.409989 | orchestrator | 2026-03-16 00:54:59.409993 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-03-16 00:54:59.409997 | orchestrator | Monday 16 March 2026 00:54:37 +0000 (0:00:09.124) 0:06:06.205 ********** 2026-03-16 00:54:59.410001 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:54:59.410004 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:54:59.410008 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:54:59.410046 | orchestrator | 2026-03-16 00:54:59.410052 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-03-16 00:54:59.410056 | orchestrator | Monday 16 March 2026 00:54:42 +0000 (0:00:04.292) 0:06:10.498 ********** 2026-03-16 00:54:59.410060 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:54:59.410063 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:54:59.410067 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:54:59.410071 | orchestrator | 2026-03-16 00:54:59.410077 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-03-16 
00:54:59.410083 | orchestrator | Monday 16 March 2026 00:54:51 +0000 (0:00:09.660) 0:06:20.159 ********** 2026-03-16 00:54:59.410089 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.410096 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.410102 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.410109 | orchestrator | 2026-03-16 00:54:59.410121 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-03-16 00:54:59.410127 | orchestrator | Monday 16 March 2026 00:54:52 +0000 (0:00:00.385) 0:06:20.545 ********** 2026-03-16 00:54:59.410135 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.410139 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.410143 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.410147 | orchestrator | 2026-03-16 00:54:59.410151 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-03-16 00:54:59.410154 | orchestrator | Monday 16 March 2026 00:54:52 +0000 (0:00:00.334) 0:06:20.880 ********** 2026-03-16 00:54:59.410158 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.410162 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.410166 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.410169 | orchestrator | 2026-03-16 00:54:59.410173 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-03-16 00:54:59.410177 | orchestrator | Monday 16 March 2026 00:54:53 +0000 (0:00:00.690) 0:06:21.570 ********** 2026-03-16 00:54:59.410181 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.410184 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.410188 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.410192 | orchestrator | 2026-03-16 00:54:59.410199 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-03-16 
00:54:59.410203 | orchestrator | Monday 16 March 2026 00:54:53 +0000 (0:00:00.366) 0:06:21.937 ********** 2026-03-16 00:54:59.410207 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.410211 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.410214 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.410218 | orchestrator | 2026-03-16 00:54:59.410222 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-03-16 00:54:59.410226 | orchestrator | Monday 16 March 2026 00:54:54 +0000 (0:00:00.350) 0:06:22.288 ********** 2026-03-16 00:54:59.410229 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:54:59.410233 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:54:59.410237 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:54:59.410241 | orchestrator | 2026-03-16 00:54:59.410244 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-03-16 00:54:59.410252 | orchestrator | Monday 16 March 2026 00:54:54 +0000 (0:00:00.437) 0:06:22.725 ********** 2026-03-16 00:54:59.410256 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:54:59.410260 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:54:59.410264 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:54:59.410267 | orchestrator | 2026-03-16 00:54:59.410271 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-03-16 00:54:59.410275 | orchestrator | Monday 16 March 2026 00:54:55 +0000 (0:00:01.496) 0:06:24.222 ********** 2026-03-16 00:54:59.410279 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:54:59.410282 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:54:59.410286 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:54:59.410290 | orchestrator | 2026-03-16 00:54:59.410294 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-16 00:54:59.410298 | orchestrator | 
testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-16 00:54:59.410303 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-16 00:54:59.410306 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-16 00:54:59.410310 | orchestrator | 2026-03-16 00:54:59.410314 | orchestrator | 2026-03-16 00:54:59.410318 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-16 00:54:59.410321 | orchestrator | Monday 16 March 2026 00:54:56 +0000 (0:00:00.846) 0:06:25.068 ********** 2026-03-16 00:54:59.410325 | orchestrator | =============================================================================== 2026-03-16 00:54:59.410329 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.66s 2026-03-16 00:54:59.410333 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 9.12s 2026-03-16 00:54:59.410336 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.68s 2026-03-16 00:54:59.410340 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.48s 2026-03-16 00:54:59.410344 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 5.10s 2026-03-16 00:54:59.410347 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.03s 2026-03-16 00:54:59.410351 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.75s 2026-03-16 00:54:59.410355 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.74s 2026-03-16 00:54:59.410359 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.71s 2026-03-16 00:54:59.410362 | orchestrator | haproxy-config : 
Copying over prometheus haproxy config ----------------- 4.68s 2026-03-16 00:54:59.410366 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.65s 2026-03-16 00:54:59.410370 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.45s 2026-03-16 00:54:59.410373 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.29s 2026-03-16 00:54:59.410377 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.25s 2026-03-16 00:54:59.410381 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 4.10s 2026-03-16 00:54:59.410385 | orchestrator | loadbalancer : Copying over haproxy.cfg --------------------------------- 4.10s 2026-03-16 00:54:59.410389 | orchestrator | loadbalancer : Remove mariadb.cfg if proxysql enabled ------------------- 4.04s 2026-03-16 00:54:59.410395 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 3.97s 2026-03-16 00:54:59.410398 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.97s 2026-03-16 00:54:59.410402 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.95s 2026-03-16 00:55:02.448889 | orchestrator | 2026-03-16 00:55:02 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:55:02.448969 | orchestrator | 2026-03-16 00:55:02 | INFO  | Task da285a89-1082-48c5-8040-28776fd685e2 is in state STARTED 2026-03-16 00:55:02.450582 | orchestrator | 2026-03-16 00:55:02 | INFO  | Task 9e0e5ef7-0dd0-410b-8b6c-a3414979e90c is in state STARTED 2026-03-16 00:55:02.450634 | orchestrator | 2026-03-16 00:55:02 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:55:05.494996 | orchestrator | 2026-03-16 00:55:05 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED 2026-03-16 00:55:05.495557 | 
orchestrator | 2026-03-16 00:55:05 | INFO  | Task da285a89-1082-48c5-8040-28776fd685e2 is in state STARTED
2026-03-16 00:55:05.496682 | orchestrator | 2026-03-16 00:55:05 | INFO  | Task 9e0e5ef7-0dd0-410b-8b6c-a3414979e90c is in state STARTED
2026-03-16 00:55:05.496708 | orchestrator | 2026-03-16 00:55:05 | INFO  | Wait 1 second(s) until the next check
[... identical status checks (tasks fb753a71-86c9-413d-a13a-600694dacbfc, da285a89-1082-48c5-8040-28776fd685e2 and 9e0e5ef7-0dd0-410b-8b6c-a3414979e90c all in state STARTED, each cycle ending with "Wait 1 second(s) until the next check") repeat roughly every 3 seconds from 00:55:08 through 00:57:10 ...]
2026-03-16 00:57:13.472090 | orchestrator | 2026-03-16 00:57:13 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state STARTED
2026-03-16 00:57:13.472811 | orchestrator | 2026-03-16 00:57:13 | INFO  | Task da285a89-1082-48c5-8040-28776fd685e2 is in state STARTED
2026-03-16 00:57:13.475075 | orchestrator | 2026-03-16 00:57:13 | INFO  | Task 9e0e5ef7-0dd0-410b-8b6c-a3414979e90c is in state STARTED
2026-03-16 00:57:13.475699 | orchestrator | 2026-03-16 00:57:13 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:57:16.538131 | orchestrator | 2026-03-16 00:57:16 | INFO  | Task fb753a71-86c9-413d-a13a-600694dacbfc is in state SUCCESS
2026-03-16 00:57:16.539867 | orchestrator |
2026-03-16 00:57:16.539938 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-16 00:57:16.539948 | orchestrator | 2.16.14
2026-03-16 00:57:16.539955 | orchestrator |
2026-03-16 00:57:16.539962 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2026-03-16 00:57:16.539969 | orchestrator |
2026-03-16 00:57:16.539975 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-16 00:57:16.539982 | orchestrator | Monday 16 March 2026 00:46:14 +0000 (0:00:00.924) 0:00:00.924 **********
2026-03-16 00:57:16.539990 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-16 00:57:16.539997 | orchestrator |
2026-03-16 00:57:16.540003 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-16 00:57:16.540009 | orchestrator | Monday 16 March 2026 00:46:15 +0000 (0:00:01.357) 0:00:02.282 **********
2026-03-16 00:57:16.540015 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.540022 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.540028 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.540034 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:57:16.540040 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:57:16.540046 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:57:16.540052 | orchestrator |
2026-03-16 00:57:16.540059 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-16 00:57:16.540085 | orchestrator | Monday 16 March 2026 00:46:17 +0000 (0:00:01.680) 0:00:03.963 **********
2026-03-16 00:57:16.540095 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.540105 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.540114 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.540124 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:57:16.540133 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:57:16.540143 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:57:16.540152 | orchestrator |
2026-03-16 00:57:16.540162 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-16 00:57:16.540171 | orchestrator | Monday 16 March 2026 00:46:18 +0000 (0:00:00.720) 0:00:04.683 **********
2026-03-16 00:57:16.540177 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.540183 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.540188 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.540194 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:57:16.540200 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:57:16.540205 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:57:16.540211 | orchestrator |
2026-03-16 00:57:16.540217 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-16 00:57:16.540223 | orchestrator | Monday 16 March 2026 00:46:19 +0000 (0:00:00.911) 0:00:05.595 **********
2026-03-16 00:57:16.540229 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.540234 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.540240 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.540246 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:57:16.540251 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:57:16.540257 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:57:16.540262 | orchestrator |
2026-03-16 00:57:16.540268 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-16 00:57:16.540274 | orchestrator | Monday 16 March 2026 00:46:20 +0000 (0:00:00.824) 0:00:06.420 **********
2026-03-16 00:57:16.540296 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.540302 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.540308 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.540313 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:57:16.540319 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:57:16.540325 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:57:16.540334 | orchestrator |
2026-03-16 00:57:16.540344 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-16 00:57:16.540353 | orchestrator | Monday 16 March 2026 00:46:20 +0000 (0:00:00.730) 0:00:07.150 **********
2026-03-16 00:57:16.540363 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.540372 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.540382 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.540421 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:57:16.540428 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:57:16.540433 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:57:16.540439 | orchestrator |
2026-03-16 00:57:16.540536 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-16 00:57:16.540546 | orchestrator | Monday 16 March 2026 00:46:21 +0000 (0:00:01.182) 0:00:08.332 **********
2026-03-16 00:57:16.540553 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.540582 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.540589 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.540612 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.540619 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.540626 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.540634 | orchestrator |
2026-03-16 00:57:16.540759 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-16 00:57:16.540770 | orchestrator | Monday 16 March 2026 00:46:23 +0000 (0:00:01.928) 0:00:10.261 **********
2026-03-16 00:57:16.540781 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.540792 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.540802 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.540813 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:57:16.540819 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:57:16.540825 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:57:16.540831 | orchestrator |
2026-03-16 00:57:16.540837 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-16 00:57:16.540843 | orchestrator | Monday 16 March 2026 00:46:25 +0000 (0:00:01.541) 0:00:11.803 **********
2026-03-16 00:57:16.540849 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-16 00:57:16.540855 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-16 00:57:16.540861 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-16 00:57:16.540867 | orchestrator |
2026-03-16 00:57:16.540872 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-16 00:57:16.540878 | orchestrator | Monday 16 March 2026 00:46:26 +0000 (0:00:00.673) 0:00:12.476 **********
2026-03-16 00:57:16.540884 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.540889 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.540895 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.540915 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:57:16.540921 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:57:16.540927 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:57:16.540932 | orchestrator |
2026-03-16 00:57:16.540938 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-16 00:57:16.540944 | orchestrator | Monday 16 March 2026 00:46:27 +0000 (0:00:03.786) 0:00:14.283 **********
2026-03-16 00:57:16.540950 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-16 00:57:16.540956 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-16 00:57:16.540961 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-16 00:57:16.540977 | orchestrator |
2026-03-16 00:57:16.540982 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-16 00:57:16.540988 | orchestrator | Monday 16 March 2026 00:46:31 +0000 (0:00:00.857) 0:00:18.070 **********
2026-03-16 00:57:16.540994 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-16 00:57:16.541000 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-16 00:57:16.541006 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-16 00:57:16.541012 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.541018 | orchestrator |
2026-03-16 00:57:16.541030 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-16 00:57:16.541036 | orchestrator | Monday 16 March 2026 00:46:32 +0000 (0:00:01.497) 0:00:18.927 **********
2026-03-16 00:57:16.541044 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-16 00:57:16.541053 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-16 00:57:16.541059 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-16 00:57:16.541065 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.541071 | orchestrator |
2026-03-16 00:57:16.541076 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-16 00:57:16.541082 | orchestrator | Monday 16 March 2026 00:46:34 +0000 (0:00:00.891) 0:00:20.425 **********
2026-03-16 00:57:16.541090 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-16 00:57:16.541099 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-16 00:57:16.541105 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-16 00:57:16.541111 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.541117 | orchestrator |
2026-03-16 00:57:16.541123 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-16 00:57:16.541129 | orchestrator | Monday 16 March 2026 00:46:34 +0000 (0:00:00.457) 0:00:21.317 **********
2026-03-16 00:57:16.541141 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-16 00:46:29.282036', 'end': '2026-03-16 00:46:29.389132', 'delta': '0:00:00.107096', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-16 00:57:16.541155 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-16 00:46:30.468320', 'end': '2026-03-16 00:46:30.574982', 'delta': '0:00:00.106662', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-16 00:57:16.541164 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-16 00:46:31.217329', 'end': '2026-03-16 00:46:31.317371', 'delta': '0:00:00.100042', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-16 00:57:16.541170 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.541176 | orchestrator |
2026-03-16 00:57:16.541182 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-16 00:57:16.541188 | orchestrator | Monday 16 March 2026 00:46:35 +0000 (0:00:03.168) 0:00:21.774 **********
2026-03-16 00:57:16.541194 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.541200 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.541205 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.541211 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:57:16.541217 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:57:16.541223 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:57:16.541228 | orchestrator |
2026-03-16 00:57:16.541234 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-16 00:57:16.541240 | orchestrator | Monday 16 March 2026 00:46:38 +0000 (0:00:02.019) 0:00:24.942 **********
2026-03-16 00:57:16.541246 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-16 00:57:16.541252 | orchestrator |
2026-03-16 00:57:16.541257 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-16 00:57:16.541263 | orchestrator | Monday 16 March 2026 00:46:40 +0000 (0:00:01.703) 0:00:26.963 **********
2026-03-16 00:57:16.541269 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.541275 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.541280 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.541286 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.541292 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.541298 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.541303 | orchestrator |
2026-03-16 00:57:16.541309 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-16 00:57:16.541315 | orchestrator | Monday 16 March 2026 00:46:42 +0000 (0:00:01.351) 0:00:28.666 **********
2026-03-16 00:57:16.541321 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.541520 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.541533 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.541539 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.541555 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.541568 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.541634 | orchestrator |
2026-03-16 00:57:16.541668 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-16 00:57:16.541678 | orchestrator | Monday 16 March 2026 00:46:43 +0000 (0:00:00.827) 0:00:30.018 **********
2026-03-16 00:57:16.541687 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.541697 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.541706 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.541715 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.541725 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.541733 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.541739 | orchestrator |
2026-03-16 00:57:16.541744 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-16 00:57:16.541750 | orchestrator | Monday 16 March 2026 00:46:44 +0000 (0:00:00.122) 0:00:30.846 **********
2026-03-16 00:57:16.541756 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.541761 | orchestrator |
2026-03-16 00:57:16.541767 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-16 00:57:16.541773 | orchestrator | Monday 16 March 2026 00:46:44 +0000 (0:00:00.181) 0:00:30.969 **********
2026-03-16 00:57:16.541779 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.541784 | orchestrator |
2026-03-16 00:57:16.541790 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-16 00:57:16.541796 | orchestrator | Monday 16 March 2026 00:46:44 +0000 (0:00:01.514) 0:00:31.150 **********
2026-03-16 00:57:16.541801 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.541807 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.541813 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.541827 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.541833 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.541839 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.541845 | orchestrator |
2026-03-16 00:57:16.541850 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-16 00:57:16.541856 | orchestrator | Monday 16 March 2026 00:46:46 +0000 0:00:32.665 **********
2026-03-16 00:57:16.541862 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.541867 | orchestrator |
skipping: [testbed-node-4] 2026-03-16 00:57:16.541873 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.541878 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.541884 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.541890 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.541895 | orchestrator | 2026-03-16 00:57:16.541901 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-16 00:57:16.541907 | orchestrator | Monday 16 March 2026 00:46:47 +0000 (0:00:00.757) 0:00:33.422 ********** 2026-03-16 00:57:16.541912 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.541918 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.541924 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.541929 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.541935 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.541940 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.541946 | orchestrator | 2026-03-16 00:57:16.541952 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-16 00:57:16.541964 | orchestrator | Monday 16 March 2026 00:46:47 +0000 (0:00:00.746) 0:00:34.168 ********** 2026-03-16 00:57:16.541970 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.541975 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.541981 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.541987 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.541992 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.541998 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.542004 | orchestrator | 2026-03-16 00:57:16.542009 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-16 00:57:16.542060 | orchestrator | Monday 16 March 2026 00:46:49 +0000 (0:00:01.220) 
0:00:35.388 ********** 2026-03-16 00:57:16.542066 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.542072 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.542078 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.542084 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.542090 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.542096 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.542101 | orchestrator | 2026-03-16 00:57:16.542107 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-16 00:57:16.542113 | orchestrator | Monday 16 March 2026 00:46:49 +0000 (0:00:00.626) 0:00:36.015 ********** 2026-03-16 00:57:16.542119 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.542125 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.542130 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.542136 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.542142 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.542148 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.542154 | orchestrator | 2026-03-16 00:57:16.542159 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-16 00:57:16.542166 | orchestrator | Monday 16 March 2026 00:46:50 +0000 (0:00:00.692) 0:00:36.708 ********** 2026-03-16 00:57:16.542171 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.542177 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.542183 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.542223 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.542230 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.542235 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.542241 | orchestrator | 2026-03-16 00:57:16.542247 | orchestrator | TASK [ceph-facts : Collect existed 
devices] ************************************ 2026-03-16 00:57:16.542253 | orchestrator | Monday 16 March 2026 00:46:51 +0000 (0:00:00.759) 0:00:37.467 ********** 2026-03-16 00:57:16.542312 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--71e0430a--6bf1--53ec--905e--7c884e89f784-osd--block--71e0430a--6bf1--53ec--905e--7c884e89f784', 'dm-uuid-LVM-wfWQF1CMpG436vHAFB7PLE7Lu4MagAEY3zN1PL2no4vUlqjNM9LHgIqpZ4CT6Met'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.542322 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--40b418b1--0bd6--568c--82b5--8ddc4abd3365-osd--block--40b418b1--0bd6--568c--82b5--8ddc4abd3365', 'dm-uuid-LVM-i8dnrqRhoTtIY3c7MgqceRpLo4rsKyC9qSnNF0kDUEfKM0Wf1KtCNzervq0fTfSo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.542338 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.542404 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.542430 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.542441 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.542469 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.542477 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.542483 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.542489 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.542510 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa', 'scsi-SQEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa-part1', 'scsi-SQEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa-part14', 'scsi-SQEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa-part15', 'scsi-SQEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa-part16', 'scsi-SQEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-16 00:57:16.542524 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--71e0430a--6bf1--53ec--905e--7c884e89f784-osd--block--71e0430a--6bf1--53ec--905e--7c884e89f784'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MQHZCZ-2P0q-WEBW-lB0Y-5ZU4-EERo-X0rt2s', 'scsi-0QEMU_QEMU_HARDDISK_ea250d8e-8a1a-4b7c-87ac-b5f969c8dfc9', 'scsi-SQEMU_QEMU_HARDDISK_ea250d8e-8a1a-4b7c-87ac-b5f969c8dfc9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-16 00:57:16.542532 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--40b418b1--0bd6--568c--82b5--8ddc4abd3365-osd--block--40b418b1--0bd6--568c--82b5--8ddc4abd3365'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZoV2Pm-dKR1-PRe1-hXHc-O2KZ-sJw6-5NOhRq', 'scsi-0QEMU_QEMU_HARDDISK_638de7de-7e30-41bf-b0e2-bce66f40688c', 'scsi-SQEMU_QEMU_HARDDISK_638de7de-7e30-41bf-b0e2-bce66f40688c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-16 00:57:16.542539 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8261b325-336c-474c-bfd4-8f783607e19f', 'scsi-SQEMU_QEMU_HARDDISK_8261b325-336c-474c-bfd4-8f783607e19f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-16 00:57:16.542546 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-16-00-03-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-16 00:57:16.542596 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ded6401a--969b--5c16--b1be--1b69fe43ded8-osd--block--ded6401a--969b--5c16--b1be--1b69fe43ded8', 'dm-uuid-LVM-C6h8PY31H7NF0avlMPMNuk3fumzXPicAnoRcVmPbxL43O22LzoMTek6lfK0ZLeGD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.542618 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--01ad088d--533b--5bd8--92eb--284afc0ad32d-osd--block--01ad088d--533b--5bd8--92eb--284afc0ad32d', 
'dm-uuid-LVM-0sBWRcIEYVfhS9z0btZt3E1nbLdVN1xAXwO6Fyl2iDazFEDpyXKpLNzLrLEf9N8c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.542624 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.542631 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.542637 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.542643 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.542649 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.542655 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.542661 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.542672 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--20eacd0a--f744--531e--8511--c5afb936ef86-osd--block--20eacd0a--f744--531e--8511--c5afb936ef86', 'dm-uuid-LVM-lSXyLnOov7r2zaqmGGp5HpJcdapQhsc2WIkvSCn26GbMJKocRSo2V2ZbLipfysP3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.542683 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--c2695ca6--70a1--5c1a--b7de--886954e6bf07-osd--block--c2695ca6--70a1--5c1a--b7de--886954e6bf07', 'dm-uuid-LVM-m5OlQhBlwbjaWKZJHpKDAF3Qtrt8tOpo0N4ndCl65u5FrpPM2sAQRID2cruaNFRe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.542692 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.542698 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.542704 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.542710 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.542716 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.542774 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.542782 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.542803 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5', 'scsi-SQEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5-part1', 'scsi-SQEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5-part14', 'scsi-SQEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5-part15', 'scsi-SQEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5-part16', 'scsi-SQEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-16 00:57:16.542811 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.542817 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.542823 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ded6401a--969b--5c16--b1be--1b69fe43ded8-osd--block--ded6401a--969b--5c16--b1be--1b69fe43ded8'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-aGd2Ie-b0yj-4Gpc-NVJZ-kPi6-fxvA-wG3FaP', 'scsi-0QEMU_QEMU_HARDDISK_dd732262-e9ae-4e48-8009-641fb05b3358', 'scsi-SQEMU_QEMU_HARDDISK_dd732262-e9ae-4e48-8009-641fb05b3358'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-16 00:57:16.542830 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.542854 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--01ad088d--533b--5bd8--92eb--284afc0ad32d-osd--block--01ad088d--533b--5bd8--92eb--284afc0ad32d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vB3OGU-zA3Q-mHqp-oSQI-LWGG-LkAy-H1f9lO', 'scsi-0QEMU_QEMU_HARDDISK_1db695b4-2be8-41cf-b2f3-0a666ad94649', 'scsi-SQEMU_QEMU_HARDDISK_1db695b4-2be8-41cf-b2f3-0a666ad94649'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-16 00:57:16.542876 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055', 'scsi-SQEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055-part1', 'scsi-SQEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055-part14', 'scsi-SQEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055-part15', 'scsi-SQEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055-part16', 'scsi-SQEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-16 00:57:16.542884 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--20eacd0a--f744--531e--8511--c5afb936ef86-osd--block--20eacd0a--f744--531e--8511--c5afb936ef86'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZyUzGW-HjlS-XV4V-WGhw-az3f-AHso-PXH4dy', 'scsi-0QEMU_QEMU_HARDDISK_da655a5c-29e3-4c18-87b3-c0b6111b4096', 'scsi-SQEMU_QEMU_HARDDISK_da655a5c-29e3-4c18-87b3-c0b6111b4096'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-16 00:57:16.542890 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c2695ca6--70a1--5c1a--b7de--886954e6bf07-osd--block--c2695ca6--70a1--5c1a--b7de--886954e6bf07'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Ilfj6s-Om41-y7OG-sdvd-dNA1-ULZC-j6tQ2n', 'scsi-0QEMU_QEMU_HARDDISK_75257afc-ff3d-423c-9b8c-9aa6b4de753a', 'scsi-SQEMU_QEMU_HARDDISK_75257afc-ff3d-423c-9b8c-9aa6b4de753a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-16 00:57:16.542911 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_573bd76d-2068-40ae-bffe-bd7cc0e0b9d7', 'scsi-SQEMU_QEMU_HARDDISK_573bd76d-2068-40ae-bffe-bd7cc0e0b9d7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-16 00:57:16.542921 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-16-00-03-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-16 00:57:16.542927 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5bc35b8-8936-4f39-b3b2-4c8e21a1af22', 'scsi-SQEMU_QEMU_HARDDISK_e5bc35b8-8936-4f39-b3b2-4c8e21a1af22'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-16 00:57:16.542933 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-16-00-03-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-16 00:57:16.542939 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.542946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.542952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.543010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.543017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.543133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.543150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.543164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.543174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.543185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f1441854-4f1b-4d8e-b300-e2132404da8a', 'scsi-SQEMU_QEMU_HARDDISK_f1441854-4f1b-4d8e-b300-e2132404da8a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f1441854-4f1b-4d8e-b300-e2132404da8a-part1', 'scsi-SQEMU_QEMU_HARDDISK_f1441854-4f1b-4d8e-b300-e2132404da8a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f1441854-4f1b-4d8e-b300-e2132404da8a-part14', 'scsi-SQEMU_QEMU_HARDDISK_f1441854-4f1b-4d8e-b300-e2132404da8a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_f1441854-4f1b-4d8e-b300-e2132404da8a-part15', 'scsi-SQEMU_QEMU_HARDDISK_f1441854-4f1b-4d8e-b300-e2132404da8a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f1441854-4f1b-4d8e-b300-e2132404da8a-part16', 'scsi-SQEMU_QEMU_HARDDISK_f1441854-4f1b-4d8e-b300-e2132404da8a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-16 00:57:16.543209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-16-00-03-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-16 00:57:16.543218 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.543227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.543240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.543249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.543258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.543268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.543277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.543294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.543304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.543336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d6913c-6b49-418e-9a91-33c70485f924', 'scsi-SQEMU_QEMU_HARDDISK_48d6913c-6b49-418e-9a91-33c70485f924'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d6913c-6b49-418e-9a91-33c70485f924-part1', 'scsi-SQEMU_QEMU_HARDDISK_48d6913c-6b49-418e-9a91-33c70485f924-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d6913c-6b49-418e-9a91-33c70485f924-part14', 'scsi-SQEMU_QEMU_HARDDISK_48d6913c-6b49-418e-9a91-33c70485f924-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d6913c-6b49-418e-9a91-33c70485f924-part15', 'scsi-SQEMU_QEMU_HARDDISK_48d6913c-6b49-418e-9a91-33c70485f924-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d6913c-6b49-418e-9a91-33c70485f924-part16', 'scsi-SQEMU_QEMU_HARDDISK_48d6913c-6b49-418e-9a91-33c70485f924-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-16 00:57:16.543351 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-16-00-03-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-16 00:57:16.543364 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.543373 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.543382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.543403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.543413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.543420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.543440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.543532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.543545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.543566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:57:16.543574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb973c19-0af3-4cee-977b-c7b07b1fc75a', 'scsi-SQEMU_QEMU_HARDDISK_fb973c19-0af3-4cee-977b-c7b07b1fc75a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb973c19-0af3-4cee-977b-c7b07b1fc75a-part1', 'scsi-SQEMU_QEMU_HARDDISK_fb973c19-0af3-4cee-977b-c7b07b1fc75a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb973c19-0af3-4cee-977b-c7b07b1fc75a-part14', 'scsi-SQEMU_QEMU_HARDDISK_fb973c19-0af3-4cee-977b-c7b07b1fc75a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb973c19-0af3-4cee-977b-c7b07b1fc75a-part15', 'scsi-SQEMU_QEMU_HARDDISK_fb973c19-0af3-4cee-977b-c7b07b1fc75a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb973c19-0af3-4cee-977b-c7b07b1fc75a-part16', 
'scsi-SQEMU_QEMU_HARDDISK_fb973c19-0af3-4cee-977b-c7b07b1fc75a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-16 00:57:16.543628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-16-00-03-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-16 00:57:16.543635 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.543641 | orchestrator | 2026-03-16 00:57:16.543648 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-16 00:57:16.543679 | orchestrator | Monday 16 March 2026 00:46:53 +0000 (0:00:02.003) 0:00:39.470 ********** 2026-03-16 00:57:16.543690 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--71e0430a--6bf1--53ec--905e--7c884e89f784-osd--block--71e0430a--6bf1--53ec--905e--7c884e89f784', 'dm-uuid-LVM-wfWQF1CMpG436vHAFB7PLE7Lu4MagAEY3zN1PL2no4vUlqjNM9LHgIqpZ4CT6Met'], 'labels': 
[], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.543747 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--40b418b1--0bd6--568c--82b5--8ddc4abd3365-osd--block--40b418b1--0bd6--568c--82b5--8ddc4abd3365', 'dm-uuid-LVM-i8dnrqRhoTtIY3c7MgqceRpLo4rsKyC9qSnNF0kDUEfKM0Wf1KtCNzervq0fTfSo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.543753 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.543765 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery 
| default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.543771 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.543782 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.543789 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.543798 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.543804 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.543863 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.543882 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa', 'scsi-SQEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa-part1', 'scsi-SQEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa-part14', 'scsi-SQEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa-part15', 'scsi-SQEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa-part16', 'scsi-SQEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.543955 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--71e0430a--6bf1--53ec--905e--7c884e89f784-osd--block--71e0430a--6bf1--53ec--905e--7c884e89f784'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MQHZCZ-2P0q-WEBW-lB0Y-5ZU4-EERo-X0rt2s', 'scsi-0QEMU_QEMU_HARDDISK_ea250d8e-8a1a-4b7c-87ac-b5f969c8dfc9', 'scsi-SQEMU_QEMU_HARDDISK_ea250d8e-8a1a-4b7c-87ac-b5f969c8dfc9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.543988 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--40b418b1--0bd6--568c--82b5--8ddc4abd3365-osd--block--40b418b1--0bd6--568c--82b5--8ddc4abd3365'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZoV2Pm-dKR1-PRe1-hXHc-O2KZ-sJw6-5NOhRq', 'scsi-0QEMU_QEMU_HARDDISK_638de7de-7e30-41bf-b0e2-bce66f40688c', 'scsi-SQEMU_QEMU_HARDDISK_638de7de-7e30-41bf-b0e2-bce66f40688c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.543999 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8261b325-336c-474c-bfd4-8f783607e19f', 'scsi-SQEMU_QEMU_HARDDISK_8261b325-336c-474c-bfd4-8f783607e19f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.544024 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-16-00-03-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.544034 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.544047 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ded6401a--969b--5c16--b1be--1b69fe43ded8-osd--block--ded6401a--969b--5c16--b1be--1b69fe43ded8', 'dm-uuid-LVM-C6h8PY31H7NF0avlMPMNuk3fumzXPicAnoRcVmPbxL43O22LzoMTek6lfK0ZLeGD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.544057 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--01ad088d--533b--5bd8--92eb--284afc0ad32d-osd--block--01ad088d--533b--5bd8--92eb--284afc0ad32d', 'dm-uuid-LVM-0sBWRcIEYVfhS9z0btZt3E1nbLdVN1xAXwO6Fyl2iDazFEDpyXKpLNzLrLEf9N8c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.544072 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.544081 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.544090 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--20eacd0a--f744--531e--8511--c5afb936ef86-osd--block--20eacd0a--f744--531e--8511--c5afb936ef86', 'dm-uuid-LVM-lSXyLnOov7r2zaqmGGp5HpJcdapQhsc2WIkvSCn26GbMJKocRSo2V2ZbLipfysP3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.544113 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.544129 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c2695ca6--70a1--5c1a--b7de--886954e6bf07-osd--block--c2695ca6--70a1--5c1a--b7de--886954e6bf07', 'dm-uuid-LVM-m5OlQhBlwbjaWKZJHpKDAF3Qtrt8tOpo0N4ndCl65u5FrpPM2sAQRID2cruaNFRe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.544139 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.544156 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.544166 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.544176 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.544199 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.544210 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.544224 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.544426 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.544500 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.544513 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.544523 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.544539 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.544551 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.544565 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.544572 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.544578 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.544583 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.544595 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.544601 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.544613 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f1441854-4f1b-4d8e-b300-e2132404da8a', 'scsi-SQEMU_QEMU_HARDDISK_f1441854-4f1b-4d8e-b300-e2132404da8a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f1441854-4f1b-4d8e-b300-e2132404da8a-part1', 'scsi-SQEMU_QEMU_HARDDISK_f1441854-4f1b-4d8e-b300-e2132404da8a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f1441854-4f1b-4d8e-b300-e2132404da8a-part14', 'scsi-SQEMU_QEMU_HARDDISK_f1441854-4f1b-4d8e-b300-e2132404da8a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f1441854-4f1b-4d8e-b300-e2132404da8a-part15', 'scsi-SQEMU_QEMU_HARDDISK_f1441854-4f1b-4d8e-b300-e2132404da8a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f1441854-4f1b-4d8e-b300-e2132404da8a-part16', 'scsi-SQEMU_QEMU_HARDDISK_f1441854-4f1b-4d8e-b300-e2132404da8a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-16 00:57:16.544634 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5', 'scsi-SQEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5-part1', 'scsi-SQEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5-part14', 'scsi-SQEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5-part15', 'scsi-SQEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5-part16', 'scsi-SQEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': 
'227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.544645 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-16-00-03-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.544652 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:57:16.544658 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'sdb', 'value': {'holders': ['ceph--ded6401a--969b--5c16--b1be--1b69fe43ded8-osd--block--ded6401a--969b--5c16--b1be--1b69fe43ded8'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-aGd2Ie-b0yj-4Gpc-NVJZ-kPi6-fxvA-wG3FaP', 'scsi-0QEMU_QEMU_HARDDISK_dd732262-e9ae-4e48-8009-641fb05b3358', 'scsi-SQEMU_QEMU_HARDDISK_dd732262-e9ae-4e48-8009-641fb05b3358'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-16 00:57:16.544668 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-16 00:57:16.544682 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--01ad088d--533b--5bd8--92eb--284afc0ad32d-osd--block--01ad088d--533b--5bd8--92eb--284afc0ad32d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vB3OGU-zA3Q-mHqp-oSQI-LWGG-LkAy-H1f9lO', 'scsi-0QEMU_QEMU_HARDDISK_1db695b4-2be8-41cf-b2f3-0a666ad94649', 'scsi-SQEMU_QEMU_HARDDISK_1db695b4-2be8-41cf-b2f3-0a666ad94649'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-16 00:57:16.544693 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-16 00:57:16.544699 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5bc35b8-8936-4f39-b3b2-4c8e21a1af22', 'scsi-SQEMU_QEMU_HARDDISK_e5bc35b8-8936-4f39-b3b2-4c8e21a1af22'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-16 00:57:16.544706 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-16 00:57:16.544723 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055', 'scsi-SQEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055-part1', 'scsi-SQEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055-part14', 'scsi-SQEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055-part15', 'scsi-SQEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055-part16', 'scsi-SQEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-16 00:57:16.544734 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-16 00:57:16.544743 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-16 00:57:16.544752 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-16 00:57:16.544762 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-16-00-03-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-16 00:57:16.544787 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--20eacd0a--f744--531e--8511--c5afb936ef86-osd--block--20eacd0a--f744--531e--8511--c5afb936ef86'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZyUzGW-HjlS-XV4V-WGhw-az3f-AHso-PXH4dy', 'scsi-0QEMU_QEMU_HARDDISK_da655a5c-29e3-4c18-87b3-c0b6111b4096', 'scsi-SQEMU_QEMU_HARDDISK_da655a5c-29e3-4c18-87b3-c0b6111b4096'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-16 00:57:16.544808 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-16 00:57:16.544818 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.544829 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-16 00:57:16.544837 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.544847 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-16 00:57:16.544857 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--c2695ca6--70a1--5c1a--b7de--886954e6bf07-osd--block--c2695ca6--70a1--5c1a--b7de--886954e6bf07'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Ilfj6s-Om41-y7OG-sdvd-dNA1-ULZC-j6tQ2n', 'scsi-0QEMU_QEMU_HARDDISK_75257afc-ff3d-423c-9b8c-9aa6b4de753a', 'scsi-SQEMU_QEMU_HARDDISK_75257afc-ff3d-423c-9b8c-9aa6b4de753a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-16 00:57:16.544882 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d6913c-6b49-418e-9a91-33c70485f924', 'scsi-SQEMU_QEMU_HARDDISK_48d6913c-6b49-418e-9a91-33c70485f924'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d6913c-6b49-418e-9a91-33c70485f924-part1', 'scsi-SQEMU_QEMU_HARDDISK_48d6913c-6b49-418e-9a91-33c70485f924-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d6913c-6b49-418e-9a91-33c70485f924-part14', 'scsi-SQEMU_QEMU_HARDDISK_48d6913c-6b49-418e-9a91-33c70485f924-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d6913c-6b49-418e-9a91-33c70485f924-part15', 'scsi-SQEMU_QEMU_HARDDISK_48d6913c-6b49-418e-9a91-33c70485f924-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d6913c-6b49-418e-9a91-33c70485f924-part16', 'scsi-SQEMU_QEMU_HARDDISK_48d6913c-6b49-418e-9a91-33c70485f924-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-16 00:57:16.544902 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-16-00-03-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-16 00:57:16.544913 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_573bd76d-2068-40ae-bffe-bd7cc0e0b9d7', 'scsi-SQEMU_QEMU_HARDDISK_573bd76d-2068-40ae-bffe-bd7cc0e0b9d7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-16 00:57:16.544923 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.544943 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-16-00-03-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-16 00:57:16.544950 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-16 00:57:16.544964 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-16 00:57:16.544971 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.544977 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-16 00:57:16.544983 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-16 00:57:16.544989 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-16 00:57:16.544995 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-16 00:57:16.545011 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-16 00:57:16.545022 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-16 00:57:16.545032 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb973c19-0af3-4cee-977b-c7b07b1fc75a', 'scsi-SQEMU_QEMU_HARDDISK_fb973c19-0af3-4cee-977b-c7b07b1fc75a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb973c19-0af3-4cee-977b-c7b07b1fc75a-part1', 'scsi-SQEMU_QEMU_HARDDISK_fb973c19-0af3-4cee-977b-c7b07b1fc75a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb973c19-0af3-4cee-977b-c7b07b1fc75a-part14', 'scsi-SQEMU_QEMU_HARDDISK_fb973c19-0af3-4cee-977b-c7b07b1fc75a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb973c19-0af3-4cee-977b-c7b07b1fc75a-part15', 'scsi-SQEMU_QEMU_HARDDISK_fb973c19-0af3-4cee-977b-c7b07b1fc75a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb973c19-0af3-4cee-977b-c7b07b1fc75a-part16', 'scsi-SQEMU_QEMU_HARDDISK_fb973c19-0af3-4cee-977b-c7b07b1fc75a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-16 00:57:16.545039 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-16-00-03-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-16 00:57:16.545058 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.545064 | orchestrator |
2026-03-16 00:57:16.545080 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-16 00:57:16.545087 | orchestrator | Monday 16 March 2026 00:46:54 +0000 (0:00:00.985) 0:00:40.456 **********
2026-03-16 00:57:16.545093 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.545099 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.545105 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.545111 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:57:16.545116 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:57:16.545122 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:57:16.545148 | orchestrator |
2026-03-16 00:57:16.545154 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-16 00:57:16.545160 | orchestrator | Monday 16 March 2026 00:46:55 +0000 (0:00:01.319) 0:00:41.775 **********
2026-03-16 00:57:16.545166 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.545172 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.545177 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:57:16.545189 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.545195 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:57:16.545201 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:57:16.545207 | orchestrator |
2026-03-16 00:57:16.545238 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-16 00:57:16.545244 | orchestrator | Monday 16 March 2026 00:46:56 +0000 (0:00:00.862) 0:00:42.638 **********
2026-03-16 00:57:16.545250 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.545256 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.545261 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.545271 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.545277 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.545283 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.545288 | orchestrator |
2026-03-16 00:57:16.545294 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-16 00:57:16.545300 | orchestrator | Monday 16 March 2026 00:46:57 +0000 (0:00:00.866) 0:00:43.505 **********
2026-03-16 00:57:16.545306 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.545311 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.545317 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.545323 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.545332 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.545341 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.545351 | orchestrator |
2026-03-16 00:57:16.545370 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-16 00:57:16.545379 | orchestrator | Monday 16 March 2026 00:46:57 +0000 (0:00:00.638) 0:00:44.143 **********
2026-03-16 00:57:16.545388 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.545398 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.545407 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.545416 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.545426 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.545435 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.545495 | orchestrator |
2026-03-16 00:57:16.545512 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-16 00:57:16.545520 | orchestrator | Monday 16 March 2026 00:46:58 +0000 (0:00:01.199) 0:00:45.342 **********
2026-03-16 00:57:16.545529 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.545538 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.545547 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.545556 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.545565 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.545575 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.545584 | orchestrator |
2026-03-16 00:57:16.545593 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-16 00:57:16.545612 | orchestrator | Monday 16 March 2026 00:46:59 +0000 (0:00:00.802) 0:00:46.145 **********
2026-03-16 00:57:16.545622 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-16 00:57:16.545632 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-16 00:57:16.545641 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-16 00:57:16.545652 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-16 00:57:16.545662 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-16 00:57:16.545670 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-16 00:57:16.545676 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-16 00:57:16.545682 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-16 00:57:16.545691 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-03-16 00:57:16.545700 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-16 00:57:16.545709 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-03-16 00:57:16.545717 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-03-16 00:57:16.545725 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-16 00:57:16.545733 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-03-16 00:57:16.545741 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-16 00:57:16.545749 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-03-16 00:57:16.545759 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-03-16 00:57:16.545769 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-03-16 00:57:16.545777 | orchestrator |
2026-03-16 00:57:16.545783 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-16 00:57:16.545789 | orchestrator | Monday 16 March 2026 00:47:03 +0000 (0:00:03.878) 0:00:50.024 **********
2026-03-16 00:57:16.545795 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-16 00:57:16.545801 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-16 00:57:16.545807 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-16 00:57:16.545813 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.545819 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-16 00:57:16.545825 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-16 00:57:16.545831 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-16 00:57:16.545837 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.545843 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-16 00:57:16.545870 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-16 00:57:16.545876 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-16 00:57:16.545882 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.545888 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-16 00:57:16.545893 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-16 00:57:16.545899 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-16 00:57:16.545905 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-16 00:57:16.545911 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-16 00:57:16.545917 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.545922 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-16 00:57:16.545928 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.545934 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-16 00:57:16.545942 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-16 00:57:16.545957 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-16 00:57:16.545969 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.545979 | orchestrator |
2026-03-16 00:57:16.545989 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-16 00:57:16.546073 | orchestrator | Monday 16 March 2026 00:47:04 +0000 (0:00:01.054) 0:00:51.079 **********
2026-03-16 00:57:16.546087 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.546097 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.546107 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.546119 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-16 00:57:16.546129 | orchestrator |
2026-03-16 00:57:16.546139 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-16 00:57:16.546149 | orchestrator | Monday 16 March 2026 00:47:06 +0000 (0:00:01.465) 0:00:52.544 **********
2026-03-16 00:57:16.546159 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.546168 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.546176 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.546185 | orchestrator |
2026-03-16 00:57:16.546194 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-16 00:57:16.546203 | orchestrator | Monday 16 March 2026 00:47:06 +0000 (0:00:00.483) 0:00:53.027 **********
2026-03-16 00:57:16.546213 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.546222 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.546232 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.546242 | orchestrator |
2026-03-16 00:57:16.546251 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-16 00:57:16.546261 | orchestrator | Monday 16 March 2026 00:47:07 +0000 (0:00:01.018) 0:00:53.428 **********
2026-03-16 00:57:16.546270 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.546279 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.546289 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.546298 | orchestrator |
2026-03-16 00:57:16.546307 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-16 00:57:16.546317 | orchestrator | Monday 16 March 2026 00:47:08 +0000 (0:00:01.018) 0:00:54.446 **********
2026-03-16 00:57:16.546326 | orchestrator |
ok: [testbed-node-3] 2026-03-16 00:57:16.546336 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:57:16.546347 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:57:16.546357 | orchestrator | 2026-03-16 00:57:16.546367 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-16 00:57:16.546378 | orchestrator | Monday 16 March 2026 00:47:08 +0000 (0:00:00.644) 0:00:55.091 ********** 2026-03-16 00:57:16.546388 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-16 00:57:16.546398 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-16 00:57:16.546409 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-16 00:57:16.546419 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.546429 | orchestrator | 2026-03-16 00:57:16.546439 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-16 00:57:16.546468 | orchestrator | Monday 16 March 2026 00:47:09 +0000 (0:00:00.466) 0:00:55.557 ********** 2026-03-16 00:57:16.546475 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-16 00:57:16.546481 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-16 00:57:16.546487 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-16 00:57:16.546492 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.546537 | orchestrator | 2026-03-16 00:57:16.546544 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-16 00:57:16.546550 | orchestrator | Monday 16 March 2026 00:47:09 +0000 (0:00:00.389) 0:00:55.947 ********** 2026-03-16 00:57:16.546556 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-16 00:57:16.546561 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-16 00:57:16.546567 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-03-16 00:57:16.546582 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.546588 | orchestrator | 2026-03-16 00:57:16.546594 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-16 00:57:16.546600 | orchestrator | Monday 16 March 2026 00:47:10 +0000 (0:00:00.427) 0:00:56.375 ********** 2026-03-16 00:57:16.546605 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:57:16.546611 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:57:16.546617 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:57:16.546623 | orchestrator | 2026-03-16 00:57:16.546628 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-16 00:57:16.546634 | orchestrator | Monday 16 March 2026 00:47:10 +0000 (0:00:00.470) 0:00:56.845 ********** 2026-03-16 00:57:16.546640 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-16 00:57:16.546646 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-16 00:57:16.546673 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-16 00:57:16.546679 | orchestrator | 2026-03-16 00:57:16.546685 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-16 00:57:16.546691 | orchestrator | Monday 16 March 2026 00:47:11 +0000 (0:00:00.987) 0:00:57.832 ********** 2026-03-16 00:57:16.546697 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-16 00:57:16.546703 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-16 00:57:16.546709 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-16 00:57:16.546715 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-16 00:57:16.546721 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-16 00:57:16.546726 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-16 00:57:16.546732 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-16 00:57:16.546738 | orchestrator | 2026-03-16 00:57:16.546744 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-16 00:57:16.546757 | orchestrator | Monday 16 March 2026 00:47:12 +0000 (0:00:00.803) 0:00:58.636 ********** 2026-03-16 00:57:16.546763 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-16 00:57:16.546769 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-16 00:57:16.546775 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-16 00:57:16.546781 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-16 00:57:16.546786 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-16 00:57:16.546792 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-16 00:57:16.546798 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-16 00:57:16.546804 | orchestrator | 2026-03-16 00:57:16.546809 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-16 00:57:16.546815 | orchestrator | Monday 16 March 2026 00:47:14 +0000 (0:00:01.858) 0:01:00.495 ********** 2026-03-16 00:57:16.546822 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:57:16.546829 | orchestrator | 2026-03-16 00:57:16.546835 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] 
********************* 2026-03-16 00:57:16.546841 | orchestrator | Monday 16 March 2026 00:47:15 +0000 (0:00:01.573) 0:01:02.068 ********** 2026-03-16 00:57:16.546847 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:57:16.546853 | orchestrator | 2026-03-16 00:57:16.546859 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-16 00:57:16.546869 | orchestrator | Monday 16 March 2026 00:47:17 +0000 (0:00:01.505) 0:01:03.573 ********** 2026-03-16 00:57:16.546875 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.546881 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.546887 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.546892 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.546953 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.546960 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.546966 | orchestrator | 2026-03-16 00:57:16.546971 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-16 00:57:16.546977 | orchestrator | Monday 16 March 2026 00:47:19 +0000 (0:00:02.045) 0:01:05.619 ********** 2026-03-16 00:57:16.546983 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:57:16.546989 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.546995 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:57:16.547001 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.547006 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:57:16.547012 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.547018 | orchestrator | 2026-03-16 00:57:16.547023 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-16 00:57:16.547029 | orchestrator | Monday 16 March 2026 00:47:20 +0000 
(0:00:01.209) 0:01:06.828 ********** 2026-03-16 00:57:16.547035 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:57:16.547041 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:57:16.547046 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.547052 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:57:16.547058 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.547064 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.547069 | orchestrator | 2026-03-16 00:57:16.547075 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-16 00:57:16.547081 | orchestrator | Monday 16 March 2026 00:47:22 +0000 (0:00:02.100) 0:01:08.929 ********** 2026-03-16 00:57:16.547087 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:57:16.547092 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.547098 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:57:16.547104 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.547109 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:57:16.547115 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.547121 | orchestrator | 2026-03-16 00:57:16.547127 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-16 00:57:16.547132 | orchestrator | Monday 16 March 2026 00:47:23 +0000 (0:00:01.063) 0:01:09.993 ********** 2026-03-16 00:57:16.547138 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.547144 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.547149 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.547155 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.547161 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.547202 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.547210 | orchestrator | 2026-03-16 00:57:16.547215 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2026-03-16 00:57:16.547221 | orchestrator | Monday 16 March 2026 00:47:25 +0000 (0:00:01.587) 0:01:11.580 ********** 2026-03-16 00:57:16.547227 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.547233 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.547238 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.547260 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.547266 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.547271 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.547277 | orchestrator | 2026-03-16 00:57:16.547283 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-16 00:57:16.547289 | orchestrator | Monday 16 March 2026 00:47:26 +0000 (0:00:01.271) 0:01:12.851 ********** 2026-03-16 00:57:16.547295 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.547300 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.547311 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.547317 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.547323 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.547328 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.547334 | orchestrator | 2026-03-16 00:57:16.547341 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-16 00:57:16.547355 | orchestrator | Monday 16 March 2026 00:47:28 +0000 (0:00:01.680) 0:01:14.531 ********** 2026-03-16 00:57:16.547365 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:57:16.547374 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:57:16.547382 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:57:16.547391 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.547400 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.547409 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.547418 | orchestrator 
| 2026-03-16 00:57:16.547426 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-16 00:57:16.547437 | orchestrator | Monday 16 March 2026 00:47:29 +0000 (0:00:01.783) 0:01:16.315 ********** 2026-03-16 00:57:16.547443 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:57:16.547471 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:57:16.547480 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:57:16.547495 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.547504 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.547513 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.547521 | orchestrator | 2026-03-16 00:57:16.547530 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-16 00:57:16.547539 | orchestrator | Monday 16 March 2026 00:47:31 +0000 (0:00:01.498) 0:01:17.814 ********** 2026-03-16 00:57:16.547548 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.547557 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.547565 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.547574 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.547583 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.547592 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.547601 | orchestrator | 2026-03-16 00:57:16.547610 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-16 00:57:16.547620 | orchestrator | Monday 16 March 2026 00:47:32 +0000 (0:00:00.746) 0:01:18.560 ********** 2026-03-16 00:57:16.547629 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.547638 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.547648 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.547655 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.547661 | orchestrator | ok: [testbed-node-1] 2026-03-16 
00:57:16.547666 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.547672 | orchestrator | 2026-03-16 00:57:16.547678 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-16 00:57:16.547684 | orchestrator | Monday 16 March 2026 00:47:33 +0000 (0:00:01.008) 0:01:19.569 ********** 2026-03-16 00:57:16.547689 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:57:16.547695 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:57:16.547701 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:57:16.547706 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.547712 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.547718 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.547724 | orchestrator | 2026-03-16 00:57:16.547729 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-16 00:57:16.547735 | orchestrator | Monday 16 March 2026 00:47:34 +0000 (0:00:01.164) 0:01:20.734 ********** 2026-03-16 00:57:16.547741 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:57:16.547747 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:57:16.547752 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:57:16.547758 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.547764 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.547769 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.547775 | orchestrator | 2026-03-16 00:57:16.547787 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-16 00:57:16.547793 | orchestrator | Monday 16 March 2026 00:47:35 +0000 (0:00:00.886) 0:01:21.620 ********** 2026-03-16 00:57:16.547799 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:57:16.547804 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:57:16.547810 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:57:16.547816 | orchestrator | skipping: [testbed-node-0] 
2026-03-16 00:57:16.547821 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.547827 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.547833 | orchestrator | 2026-03-16 00:57:16.547838 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-16 00:57:16.547844 | orchestrator | Monday 16 March 2026 00:47:36 +0000 (0:00:00.865) 0:01:22.486 ********** 2026-03-16 00:57:16.547850 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.547856 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.547861 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.547867 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.547872 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.547878 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.547884 | orchestrator | 2026-03-16 00:57:16.547890 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-16 00:57:16.547895 | orchestrator | Monday 16 March 2026 00:47:36 +0000 (0:00:00.851) 0:01:23.337 ********** 2026-03-16 00:57:16.547901 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.547907 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.547913 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.547918 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.547938 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.547945 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.547951 | orchestrator | 2026-03-16 00:57:16.547956 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-16 00:57:16.547962 | orchestrator | Monday 16 March 2026 00:47:37 +0000 (0:00:00.722) 0:01:24.060 ********** 2026-03-16 00:57:16.547968 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.547974 | orchestrator | skipping: [testbed-node-4] 
2026-03-16 00:57:16.547980 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.547985 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.547991 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.547997 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.548003 | orchestrator | 2026-03-16 00:57:16.548009 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-16 00:57:16.548014 | orchestrator | Monday 16 March 2026 00:47:38 +0000 (0:00:00.719) 0:01:24.779 ********** 2026-03-16 00:57:16.548020 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:57:16.548026 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:57:16.548032 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:57:16.548037 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.548043 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.548049 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.548054 | orchestrator | 2026-03-16 00:57:16.548060 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-16 00:57:16.548072 | orchestrator | Monday 16 March 2026 00:47:38 +0000 (0:00:00.568) 0:01:25.348 ********** 2026-03-16 00:57:16.548078 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:57:16.548083 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:57:16.548089 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:57:16.548095 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.548101 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.548106 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.548112 | orchestrator | 2026-03-16 00:57:16.548118 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-16 00:57:16.548124 | orchestrator | Monday 16 March 2026 00:47:40 +0000 (0:00:01.163) 0:01:26.511 ********** 2026-03-16 00:57:16.548130 | orchestrator | changed: [testbed-node-3] 2026-03-16 
00:57:16.548140 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:57:16.548146 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:57:16.548151 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:57:16.548157 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:57:16.548163 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:57:16.548169 | orchestrator | 2026-03-16 00:57:16.548174 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-16 00:57:16.548180 | orchestrator | Monday 16 March 2026 00:47:41 +0000 (0:00:01.366) 0:01:27.878 ********** 2026-03-16 00:57:16.548186 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:57:16.548192 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:57:16.548197 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:57:16.548203 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:57:16.548209 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:57:16.548215 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:57:16.548220 | orchestrator | 2026-03-16 00:57:16.548226 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-16 00:57:16.548232 | orchestrator | Monday 16 March 2026 00:47:43 +0000 (0:00:02.306) 0:01:30.184 ********** 2026-03-16 00:57:16.548238 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:57:16.548245 | orchestrator | 2026-03-16 00:57:16.548251 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-16 00:57:16.548257 | orchestrator | Monday 16 March 2026 00:47:44 +0000 (0:00:01.008) 0:01:31.193 ********** 2026-03-16 00:57:16.548262 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.548268 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.548274 
| orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.548279 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.548285 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.548291 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.548297 | orchestrator | 2026-03-16 00:57:16.548302 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-16 00:57:16.548308 | orchestrator | Monday 16 March 2026 00:47:45 +0000 (0:00:00.633) 0:01:31.827 ********** 2026-03-16 00:57:16.548314 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.548320 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.548327 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.548336 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.548345 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.548355 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.548371 | orchestrator | 2026-03-16 00:57:16.548380 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-16 00:57:16.548389 | orchestrator | Monday 16 March 2026 00:47:46 +0000 (0:00:00.773) 0:01:32.601 ********** 2026-03-16 00:57:16.548398 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-16 00:57:16.548407 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-16 00:57:16.548416 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-16 00:57:16.548425 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-16 00:57:16.548434 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-16 00:57:16.548490 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-16 00:57:16.548503 | orchestrator 
| ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-16 00:57:16.548512 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-16 00:57:16.548518 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-16 00:57:16.548524 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-16 00:57:16.548551 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-16 00:57:16.548558 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-16 00:57:16.548564 | orchestrator | 2026-03-16 00:57:16.548570 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-16 00:57:16.548575 | orchestrator | Monday 16 March 2026 00:47:47 +0000 (0:00:01.514) 0:01:34.115 ********** 2026-03-16 00:57:16.548581 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:57:16.548587 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:57:16.548592 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:57:16.548598 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:57:16.548604 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:57:16.548609 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:57:16.548615 | orchestrator | 2026-03-16 00:57:16.548621 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-16 00:57:16.548626 | orchestrator | Monday 16 March 2026 00:47:49 +0000 (0:00:01.839) 0:01:35.954 ********** 2026-03-16 00:57:16.548632 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.548638 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.548643 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.548649 | orchestrator | skipping: [testbed-node-0] 2026-03-16 
00:57:16.548654 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.548665 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.548671 | orchestrator | 2026-03-16 00:57:16.548677 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-16 00:57:16.548682 | orchestrator | Monday 16 March 2026 00:47:50 +0000 (0:00:00.725) 0:01:36.679 ********** 2026-03-16 00:57:16.548688 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.548694 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.548699 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.548705 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.548710 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.548716 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.548722 | orchestrator | 2026-03-16 00:57:16.548728 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-16 00:57:16.548733 | orchestrator | Monday 16 March 2026 00:47:51 +0000 (0:00:00.964) 0:01:37.643 ********** 2026-03-16 00:57:16.548739 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.548744 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.548750 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.548756 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.548761 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.548767 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.548772 | orchestrator | 2026-03-16 00:57:16.548778 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-16 00:57:16.548784 | orchestrator | Monday 16 March 2026 00:47:52 +0000 (0:00:00.789) 0:01:38.432 ********** 2026-03-16 00:57:16.548790 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-16 00:57:16.548796 | orchestrator |
2026-03-16 00:57:16.548802 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-16 00:57:16.548807 | orchestrator | Monday 16 March 2026 00:47:54 +0000 (0:00:02.586) 0:01:41.019 **********
2026-03-16 00:57:16.548813 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:57:16.548819 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:57:16.548824 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:57:16.548830 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.548836 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.548841 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.548847 | orchestrator |
2026-03-16 00:57:16.548853 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-16 00:57:16.548863 | orchestrator | Monday 16 March 2026 00:48:38 +0000 (0:00:43.497) 0:02:24.517 **********
2026-03-16 00:57:16.548869 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-16 00:57:16.548875 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-16 00:57:16.548880 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-16 00:57:16.548886 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.548892 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-16 00:57:16.548898 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-16 00:57:16.548903 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-16 00:57:16.548909 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-16 00:57:16.548915 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-16 00:57:16.548920 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-16 00:57:16.548926 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.548932 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-16 00:57:16.548938 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-16 00:57:16.548943 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-16 00:57:16.548949 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.548955 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-16 00:57:16.548961 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-16 00:57:16.548966 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-16 00:57:16.548972 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.548978 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.548994 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-16 00:57:16.549005 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-16 00:57:16.549015 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-16 00:57:16.549025 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.549035 | orchestrator |
2026-03-16 00:57:16.549045 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-16 00:57:16.549054 | orchestrator | Monday 16 March 2026 00:48:38 +0000 (0:00:00.624) 0:02:25.142 **********
2026-03-16 00:57:16.549062 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.549073 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.549084 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.549093 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.549104 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.549115 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.549125 | orchestrator |
2026-03-16 00:57:16.549133 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-16 00:57:16.549142 | orchestrator | Monday 16 March 2026 00:48:39 +0000 (0:00:00.159) 0:02:25.846 **********
2026-03-16 00:57:16.549152 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.549161 | orchestrator |
2026-03-16 00:57:16.549170 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-16 00:57:16.549184 | orchestrator | Monday 16 March 2026 00:48:39 +0000 (0:00:00.584) 0:02:26.005 **********
2026-03-16 00:57:16.549193 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.549203 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.549213 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.549223 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.549233 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.549255 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.549265 | orchestrator |
2026-03-16 00:57:16.549274 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-16 00:57:16.549284 | orchestrator | Monday 16 March 2026 00:48:40 +0000 (0:00:00.737) 0:02:26.590 **********
2026-03-16 00:57:16.549293 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.549303 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.549311 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.549319 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.549326 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.549336 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.549345 | orchestrator |
2026-03-16 00:57:16.549354 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-16 00:57:16.549362 | orchestrator | Monday 16 March 2026 00:48:40 +0000 (0:00:00.643) 0:02:27.327 **********
2026-03-16 00:57:16.549371 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.549380 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.549388 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.549397 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.549406 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.549414 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.549423 | orchestrator |
2026-03-16 00:57:16.549432 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-16 00:57:16.549441 | orchestrator | Monday 16 March 2026 00:48:41 +0000 (0:00:00.643) 0:02:27.970 **********
2026-03-16 00:57:16.549468 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.549478 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.549486 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.549495 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:57:16.549505 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:57:16.549515 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:57:16.549525 | orchestrator |
2026-03-16 00:57:16.549535 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-16 00:57:16.549545 | orchestrator | Monday 16 March 2026 00:48:44 +0000 (0:00:02.450) 0:02:30.420 **********
2026-03-16 00:57:16.549554 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.549565 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.549574 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.549584 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:57:16.549594 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:57:16.549603 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:57:16.549613 | orchestrator |
2026-03-16 00:57:16.549624 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-16 00:57:16.549633 | orchestrator | Monday 16 March 2026 00:48:44 +0000 (0:00:00.512) 0:02:30.933 **********
2026-03-16 00:57:16.549644 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-16 00:57:16.549657 | orchestrator |
2026-03-16 00:57:16.549665 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-16 00:57:16.549675 | orchestrator | Monday 16 March 2026 00:48:45 +0000 (0:00:00.944) 0:02:31.877 **********
2026-03-16 00:57:16.549684 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.549693 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.549702 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.549711 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.549719 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.549727 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.549736 | orchestrator |
2026-03-16 00:57:16.549745 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-16 00:57:16.549754 | orchestrator | Monday 16 March 2026 00:48:46 +0000 (0:00:00.845) 0:02:32.723 **********
2026-03-16 00:57:16.549763 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.549771 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.549794 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.549804 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.549813 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.549821 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.549830 | orchestrator |
2026-03-16 00:57:16.549839 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-16 00:57:16.549847 | orchestrator | Monday 16 March 2026 00:48:46 +0000 (0:00:00.631) 0:02:33.355 **********
2026-03-16 00:57:16.549856 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.549866 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.549900 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.549910 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.549919 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.549927 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.549936 | orchestrator |
2026-03-16 00:57:16.549945 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-16 00:57:16.549954 | orchestrator | Monday 16 March 2026 00:48:47 +0000 (0:00:00.878) 0:02:34.234 **********
2026-03-16 00:57:16.549963 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.549972 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.549980 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.549988 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.549997 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.550005 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.550060 | orchestrator |
2026-03-16 00:57:16.550073 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-16 00:57:16.550084 | orchestrator | Monday 16 March 2026 00:48:48 +0000 (0:00:00.642) 0:02:34.877 **********
2026-03-16 00:57:16.550094 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.550103 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.550113 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.550123 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.550132 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.550142 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.550151 | orchestrator |
2026-03-16 00:57:16.550172 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-16 00:57:16.550183 | orchestrator | Monday 16 March 2026 00:48:49 +0000 (0:00:00.677) 0:02:35.554 **********
2026-03-16 00:57:16.550193 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.550203 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.550213 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.550224 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.550235 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.550244 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.550254 | orchestrator |
2026-03-16 00:57:16.550264 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-16 00:57:16.550275 | orchestrator | Monday 16 March 2026 00:48:49 +0000 (0:00:00.576) 0:02:36.131 **********
2026-03-16 00:57:16.550285 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.550296 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.550304 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.550314 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.550323 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.550332 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.550343 | orchestrator |
2026-03-16 00:57:16.550353 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-16 00:57:16.550363 | orchestrator | Monday 16 March 2026 00:48:50 +0000 (0:00:00.697) 0:02:36.829 **********
2026-03-16 00:57:16.550373 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.550382 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.550392 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.550401 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.550411 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.550433 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.550501 | orchestrator |
2026-03-16 00:57:16.550514 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-16 00:57:16.550525 | orchestrator | Monday 16 March 2026 00:48:51 +0000 (0:00:00.772) 0:02:37.601 **********
2026-03-16 00:57:16.550534 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.550544 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.550550 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:57:16.550555 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:57:16.550561 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.550567 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:57:16.550573 | orchestrator |
2026-03-16 00:57:16.550578 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-16 00:57:16.550584 | orchestrator | Monday 16 March 2026 00:48:53 +0000 (0:00:01.940) 0:02:39.542 **********
2026-03-16 00:57:16.550591 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-16 00:57:16.550597 | orchestrator |
2026-03-16 00:57:16.550603 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-16 00:57:16.550609 | orchestrator | Monday 16 March 2026 00:48:54 +0000 (0:00:01.231) 0:02:40.774 **********
2026-03-16 00:57:16.550615 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-03-16 00:57:16.550621 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-03-16 00:57:16.550627 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-03-16 00:57:16.550632 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-03-16 00:57:16.550638 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-03-16 00:57:16.550644 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-03-16 00:57:16.550649 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-03-16 00:57:16.550655 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-03-16 00:57:16.550661 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-03-16 00:57:16.550666 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-03-16 00:57:16.550672 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-03-16 00:57:16.550678 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-03-16 00:57:16.550684 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-03-16 00:57:16.550689 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-03-16 00:57:16.550695 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-03-16 00:57:16.550701 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-03-16 00:57:16.550706 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-03-16 00:57:16.550712 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-03-16 00:57:16.550738 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-03-16 00:57:16.550744 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-03-16 00:57:16.550750 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-03-16 00:57:16.550756 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-03-16 00:57:16.550762 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-03-16 00:57:16.550768 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-03-16 00:57:16.550774 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-03-16 00:57:16.550779 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-03-16 00:57:16.550785 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-03-16 00:57:16.550791 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-03-16 00:57:16.550797 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-03-16 00:57:16.550802 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-03-16 00:57:16.550815 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-03-16 00:57:16.550820 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-03-16 00:57:16.550826 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-03-16 00:57:16.550838 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-03-16 00:57:16.550844 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-03-16 00:57:16.550850 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-03-16 00:57:16.550855 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-03-16 00:57:16.550861 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-03-16 00:57:16.550867 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-03-16 00:57:16.550873 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-03-16 00:57:16.550878 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-03-16 00:57:16.550884 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-03-16 00:57:16.550890 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-03-16 00:57:16.550896 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-03-16 00:57:16.550901 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-16 00:57:16.550907 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-03-16 00:57:16.550913 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-03-16 00:57:16.550918 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-03-16 00:57:16.550924 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-16 00:57:16.550930 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-16 00:57:16.550936 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-16 00:57:16.550941 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-16 00:57:16.550947 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-16 00:57:16.550952 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-03-16 00:57:16.550958 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-16 00:57:16.550964 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-16 00:57:16.550970 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-16 00:57:16.550975 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-16 00:57:16.550981 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-16 00:57:16.550987 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-16 00:57:16.550992 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-16 00:57:16.550998 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-16 00:57:16.551004 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-16 00:57:16.551009 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-16 00:57:16.551015 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-16 00:57:16.551021 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-16 00:57:16.551026 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-16 00:57:16.551032 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-16 00:57:16.551038 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-16 00:57:16.551043 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-16 00:57:16.551054 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-16 00:57:16.551060 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-16 00:57:16.551066 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-16 00:57:16.551071 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-16 00:57:16.551077 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-16 00:57:16.551083 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-16 00:57:16.551099 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-16 00:57:16.551105 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-16 00:57:16.551111 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-03-16 00:57:16.551117 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-16 00:57:16.551123 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-16 00:57:16.551128 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-16 00:57:16.551134 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-16 00:57:16.551140 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-16 00:57:16.551146 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-03-16 00:57:16.551151 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-03-16 00:57:16.551157 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-03-16 00:57:16.551163 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-03-16 00:57:16.551169 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-03-16 00:57:16.551175 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-16 00:57:16.551183 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-03-16 00:57:16.551189 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-03-16 00:57:16.551195 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-03-16 00:57:16.551200 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-03-16 00:57:16.551206 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-03-16 00:57:16.551212 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-03-16 00:57:16.551218 | orchestrator |
2026-03-16 00:57:16.551223 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-16 00:57:16.551229 | orchestrator | Monday 16 March 2026 00:49:01 +0000 (0:00:07.223) 0:02:47.998 **********
2026-03-16 00:57:16.551235 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.551241 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.551246 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.551253 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-16 00:57:16.551260 | orchestrator |
2026-03-16 00:57:16.551265 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-16 00:57:16.551271 | orchestrator | Monday 16 March 2026 00:49:02 +0000 (0:00:00.830) 0:02:48.828 **********
2026-03-16 00:57:16.551277 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-16 00:57:16.551284 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-16 00:57:16.551290 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-16 00:57:16.551295 | orchestrator |
2026-03-16 00:57:16.551301 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-16 00:57:16.551311 | orchestrator | Monday 16 March 2026 00:49:03 +0000 (0:00:01.013) 0:02:49.842 **********
2026-03-16 00:57:16.551317 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-16 00:57:16.551323 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-16 00:57:16.551329 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-16 00:57:16.551335 | orchestrator |
2026-03-16 00:57:16.551342 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-16 00:57:16.551352 | orchestrator | Monday 16 March 2026 00:49:05 +0000 (0:00:01.634) 0:02:51.476 **********
2026-03-16 00:57:16.551361 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.551370 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.551381 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.551391 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.551402 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.551410 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.551415 | orchestrator |
2026-03-16 00:57:16.551421 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-16 00:57:16.551430 | orchestrator | Monday 16 March 2026 00:49:05 +0000 (0:00:00.513) 0:02:51.989 **********
2026-03-16 00:57:16.551440 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.551468 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.551478 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.551487 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.551496 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.551505 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.551514 | orchestrator |
2026-03-16 00:57:16.551524 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-16 00:57:16.551534 | orchestrator | Monday 16 March 2026 00:49:06 +0000 (0:00:00.804) 0:02:52.794 **********
2026-03-16 00:57:16.551543 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.551552 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.551562 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.551571 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.551581 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.551590 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.551600 | orchestrator |
2026-03-16 00:57:16.551626 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-16 00:57:16.551637 | orchestrator | Monday 16 March 2026 00:49:07 +0000 (0:00:00.682) 0:02:53.476 **********
2026-03-16 00:57:16.551647 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.551656 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.551665 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.551681 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.551693 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.551701 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.551710 | orchestrator |
2026-03-16 00:57:16.551718 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-16 00:57:16.551726 | orchestrator | Monday 16 March 2026 00:49:08 +0000 (0:00:01.425) 0:02:54.901 **********
2026-03-16 00:57:16.551735 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.551743 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.551752 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.551760 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.551768 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.551777 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.551786 | orchestrator |
2026-03-16 00:57:16.551795 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-16 00:57:16.551806 | orchestrator | Monday 16 March 2026 00:49:09 +0000 (0:00:00.735) 0:02:55.561 **********
2026-03-16 00:57:16.551824 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.551835 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.551841 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.551847 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.551852 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.551858 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.551864 | orchestrator |
2026-03-16 00:57:16.551869 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-16 00:57:16.551875 | orchestrator | Monday 16 March 2026 00:49:09 +0000 (0:00:00.625) 0:02:56.296 **********
2026-03-16 00:57:16.551881 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.551887 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.551892 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.551898 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.551903 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.551909 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.551915 | orchestrator |
2026-03-16 00:57:16.551920 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-16 00:57:16.551926 | orchestrator | Monday 16 March 2026 00:49:10 +0000 (0:00:00.736) 0:02:56.922 **********
2026-03-16 00:57:16.551932 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.551938 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.551943 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.551949 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.551955 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.551960 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.551966 | orchestrator |
2026-03-16 00:57:16.551972 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-16 00:57:16.551977 | orchestrator | Monday 16 March 2026 00:49:11 +0000 (0:00:00.736) 0:02:57.659 **********
2026-03-16 00:57:16.551983 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.551989 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.551995 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.552000 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.552006 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.552012 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.552017 | orchestrator |
2026-03-16 00:57:16.552023 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-16 00:57:16.552029 | orchestrator | Monday 16 March 2026 00:49:14 +0000 (0:00:03.019) 0:03:00.678 **********
2026-03-16 00:57:16.552035 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.552040 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.552046 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.552052 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.552057 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.552063 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.552069 | orchestrator |
2026-03-16 00:57:16.552074 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-16 00:57:16.552080 | orchestrator | Monday 16 March 2026 00:49:15 +0000 (0:00:00.872) 0:03:01.551 **********
2026-03-16 00:57:16.552086 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.552092 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.552097 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.552103 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.552109 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.552114 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.552120 | orchestrator |
2026-03-16 00:57:16.552126 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-16 00:57:16.552131 | orchestrator | Monday 16 March 2026 00:49:15 +0000 (0:00:00.790) 0:03:02.341 **********
2026-03-16 00:57:16.552137 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.552143 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.552148 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.552159 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.552165 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.552170 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.552176 | orchestrator |
2026-03-16 00:57:16.552182 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-16 00:57:16.552187 | orchestrator | Monday 16 March 2026 00:49:16 +0000 (0:00:00.926) 0:03:03.268 **********
2026-03-16 00:57:16.552193 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-16 00:57:16.552199 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-16 00:57:16.552205 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-16 00:57:16.552211 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.552238 | orchestrator |
2026-03-16 00:57:16 | INFO  | Task da285a89-1082-48c5-8040-28776fd685e2 is in state STARTED
2026-03-16 00:57:16.552245 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.552251 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.552256 | orchestrator |
2026-03-16 00:57:16.552262 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-16 00:57:16.552268 | orchestrator | Monday 16 March 2026 00:49:17 +0000 (0:00:00.683) 0:03:03.951 **********
2026-03-16 00:57:16.552276 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-03-16 00:57:16.552289 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-03-16 00:57:16.552296 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-03-16 00:57:16.552302 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-03-16 00:57:16.552308 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.552314 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.552320 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-03-16 00:57:16.552326 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-03-16 00:57:16.552335 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.552348 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.552363 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.552373 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.552383 | orchestrator |
2026-03-16 00:57:16.552400 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-16 00:57:16.552410 | orchestrator | Monday 16 March 2026 00:49:18 +0000 (0:00:00.893) 0:03:04.844 **********
2026-03-16 00:57:16.552420 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.552430 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.552441 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.552495 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.552502 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.552508 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.552513 | orchestrator |
2026-03-16 00:57:16.552519 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-16 00:57:16.552525 | orchestrator | Monday 16 March 2026 00:49:19 +0000 (0:00:00.623) 0:03:05.468 **********
2026-03-16 00:57:16.552531 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.552536 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.552542 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.552548 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.552553 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.552559 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.552565 | orchestrator |
2026-03-16 00:57:16.552571 | orchestrator | TASK
[ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-16 00:57:16.552576 | orchestrator | Monday 16 March 2026 00:49:19 +0000 (0:00:00.716) 0:03:06.185 ********** 2026-03-16 00:57:16.552582 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.552588 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.552594 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.552599 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.552605 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.552611 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.552616 | orchestrator | 2026-03-16 00:57:16.552622 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-16 00:57:16.552628 | orchestrator | Monday 16 March 2026 00:49:20 +0000 (0:00:00.684) 0:03:06.869 ********** 2026-03-16 00:57:16.552634 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.552639 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.552645 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.552652 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.552662 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.552687 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.552696 | orchestrator | 2026-03-16 00:57:16.552705 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-16 00:57:16.552714 | orchestrator | Monday 16 March 2026 00:49:21 +0000 (0:00:01.014) 0:03:07.883 ********** 2026-03-16 00:57:16.552723 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.552731 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.552739 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.552748 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.552758 | orchestrator 
| skipping: [testbed-node-1] 2026-03-16 00:57:16.552768 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.552777 | orchestrator | 2026-03-16 00:57:16.552787 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-16 00:57:16.552797 | orchestrator | Monday 16 March 2026 00:49:22 +0000 (0:00:00.646) 0:03:08.530 ********** 2026-03-16 00:57:16.552805 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:57:16.552811 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:57:16.552816 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:57:16.552822 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.552828 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.552833 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.552839 | orchestrator | 2026-03-16 00:57:16.552844 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-16 00:57:16.552850 | orchestrator | Monday 16 March 2026 00:49:23 +0000 (0:00:01.178) 0:03:09.709 ********** 2026-03-16 00:57:16.552867 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-16 00:57:16.552873 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-16 00:57:16.552878 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-16 00:57:16.552884 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.552889 | orchestrator | 2026-03-16 00:57:16.552894 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-16 00:57:16.552900 | orchestrator | Monday 16 March 2026 00:49:23 +0000 (0:00:00.399) 0:03:10.108 ********** 2026-03-16 00:57:16.552905 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-16 00:57:16.552910 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-16 00:57:16.552916 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-5)  2026-03-16 00:57:16.552921 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.552926 | orchestrator | 2026-03-16 00:57:16.552932 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-16 00:57:16.552937 | orchestrator | Monday 16 March 2026 00:49:24 +0000 (0:00:00.527) 0:03:10.635 ********** 2026-03-16 00:57:16.552942 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-16 00:57:16.552948 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-16 00:57:16.552953 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-16 00:57:16.552959 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.552964 | orchestrator | 2026-03-16 00:57:16.552969 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-16 00:57:16.552975 | orchestrator | Monday 16 March 2026 00:49:24 +0000 (0:00:00.345) 0:03:10.981 ********** 2026-03-16 00:57:16.552980 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:57:16.552985 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:57:16.552991 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:57:16.552996 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.553001 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.553007 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.553012 | orchestrator | 2026-03-16 00:57:16.553017 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-16 00:57:16.553023 | orchestrator | Monday 16 March 2026 00:49:25 +0000 (0:00:00.538) 0:03:11.520 ********** 2026-03-16 00:57:16.553028 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-16 00:57:16.553034 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-03-16 00:57:16.553039 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.553044 | orchestrator | 
skipping: [testbed-node-1] => (item=0)  2026-03-16 00:57:16.553050 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.553055 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-16 00:57:16.553060 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-16 00:57:16.553065 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-03-16 00:57:16.553071 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.553076 | orchestrator | 2026-03-16 00:57:16.553081 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-16 00:57:16.553087 | orchestrator | Monday 16 March 2026 00:49:27 +0000 (0:00:02.320) 0:03:13.841 ********** 2026-03-16 00:57:16.553092 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:57:16.553098 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:57:16.553103 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:57:16.553108 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:57:16.553113 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:57:16.553119 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:57:16.553124 | orchestrator | 2026-03-16 00:57:16.553129 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-16 00:57:16.553135 | orchestrator | Monday 16 March 2026 00:49:30 +0000 (0:00:02.977) 0:03:16.818 ********** 2026-03-16 00:57:16.553140 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:57:16.553157 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:57:16.553162 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:57:16.553167 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:57:16.553173 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:57:16.553178 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:57:16.553183 | orchestrator | 2026-03-16 00:57:16.553189 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] 
********************************** 2026-03-16 00:57:16.553194 | orchestrator | Monday 16 March 2026 00:49:31 +0000 (0:00:01.038) 0:03:17.856 ********** 2026-03-16 00:57:16.553199 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.553205 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.553210 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.553215 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:57:16.553221 | orchestrator | 2026-03-16 00:57:16.553239 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-03-16 00:57:16.553245 | orchestrator | Monday 16 March 2026 00:49:32 +0000 (0:00:00.901) 0:03:18.758 ********** 2026-03-16 00:57:16.553251 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.553256 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.553261 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.553266 | orchestrator | 2026-03-16 00:57:16.553272 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-03-16 00:57:16.553277 | orchestrator | Monday 16 March 2026 00:49:32 +0000 (0:00:00.295) 0:03:19.053 ********** 2026-03-16 00:57:16.553283 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:57:16.553288 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:57:16.553293 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:57:16.553299 | orchestrator | 2026-03-16 00:57:16.553304 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-03-16 00:57:16.553309 | orchestrator | Monday 16 March 2026 00:49:34 +0000 (0:00:01.422) 0:03:20.476 ********** 2026-03-16 00:57:16.553315 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-16 00:57:16.553320 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-16 
00:57:16.553329 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-16 00:57:16.553338 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.553348 | orchestrator | 2026-03-16 00:57:16.553362 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-03-16 00:57:16.553370 | orchestrator | Monday 16 March 2026 00:49:34 +0000 (0:00:00.589) 0:03:21.065 ********** 2026-03-16 00:57:16.553378 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.553387 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.553395 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.553404 | orchestrator | 2026-03-16 00:57:16.553414 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-16 00:57:16.553424 | orchestrator | Monday 16 March 2026 00:49:34 +0000 (0:00:00.303) 0:03:21.369 ********** 2026-03-16 00:57:16.553434 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.553459 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.553465 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.553470 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-16 00:57:16.553476 | orchestrator | 2026-03-16 00:57:16.553481 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-03-16 00:57:16.553487 | orchestrator | Monday 16 March 2026 00:49:35 +0000 (0:00:00.907) 0:03:22.277 ********** 2026-03-16 00:57:16.553492 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-16 00:57:16.553497 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-16 00:57:16.553503 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-16 00:57:16.553508 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.553513 | orchestrator | 2026-03-16 
00:57:16.553524 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-03-16 00:57:16.553530 | orchestrator | Monday 16 March 2026 00:49:36 +0000 (0:00:00.384) 0:03:22.662 ********** 2026-03-16 00:57:16.553535 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.553541 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.553546 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.553551 | orchestrator | 2026-03-16 00:57:16.553557 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-03-16 00:57:16.553562 | orchestrator | Monday 16 March 2026 00:49:36 +0000 (0:00:00.280) 0:03:22.943 ********** 2026-03-16 00:57:16.553568 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.553573 | orchestrator | 2026-03-16 00:57:16.553579 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-03-16 00:57:16.553584 | orchestrator | Monday 16 March 2026 00:49:36 +0000 (0:00:00.194) 0:03:23.138 ********** 2026-03-16 00:57:16.553589 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.553595 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.553600 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.553605 | orchestrator | 2026-03-16 00:57:16.553611 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-03-16 00:57:16.553616 | orchestrator | Monday 16 March 2026 00:49:37 +0000 (0:00:00.273) 0:03:23.412 ********** 2026-03-16 00:57:16.553621 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.553627 | orchestrator | 2026-03-16 00:57:16.553632 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-03-16 00:57:16.553638 | orchestrator | Monday 16 March 2026 00:49:37 +0000 (0:00:00.193) 0:03:23.605 ********** 2026-03-16 00:57:16.553643 | orchestrator | skipping: 
[testbed-node-3] 2026-03-16 00:57:16.553648 | orchestrator | 2026-03-16 00:57:16.553654 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-03-16 00:57:16.553659 | orchestrator | Monday 16 March 2026 00:49:37 +0000 (0:00:00.216) 0:03:23.822 ********** 2026-03-16 00:57:16.553664 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.553670 | orchestrator | 2026-03-16 00:57:16.553675 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-03-16 00:57:16.553680 | orchestrator | Monday 16 March 2026 00:49:37 +0000 (0:00:00.116) 0:03:23.938 ********** 2026-03-16 00:57:16.553686 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.553691 | orchestrator | 2026-03-16 00:57:16.553696 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-03-16 00:57:16.553702 | orchestrator | Monday 16 March 2026 00:49:38 +0000 (0:00:00.562) 0:03:24.501 ********** 2026-03-16 00:57:16.553707 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.553713 | orchestrator | 2026-03-16 00:57:16.553718 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-03-16 00:57:16.553723 | orchestrator | Monday 16 March 2026 00:49:38 +0000 (0:00:00.216) 0:03:24.718 ********** 2026-03-16 00:57:16.553729 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-16 00:57:16.553734 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-16 00:57:16.553740 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-16 00:57:16.553745 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.553751 | orchestrator | 2026-03-16 00:57:16.553767 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-03-16 00:57:16.553773 | orchestrator | Monday 16 March 2026 00:49:38 +0000 (0:00:00.373) 
0:03:25.091 ********** 2026-03-16 00:57:16.553779 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.553784 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.553789 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.553795 | orchestrator | 2026-03-16 00:57:16.553800 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-03-16 00:57:16.553805 | orchestrator | Monday 16 March 2026 00:49:38 +0000 (0:00:00.237) 0:03:25.328 ********** 2026-03-16 00:57:16.553811 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.553820 | orchestrator | 2026-03-16 00:57:16.553826 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-03-16 00:57:16.553831 | orchestrator | Monday 16 March 2026 00:49:39 +0000 (0:00:00.201) 0:03:25.530 ********** 2026-03-16 00:57:16.553836 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.553842 | orchestrator | 2026-03-16 00:57:16.553847 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-16 00:57:16.553852 | orchestrator | Monday 16 March 2026 00:49:39 +0000 (0:00:00.202) 0:03:25.733 ********** 2026-03-16 00:57:16.553858 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.553863 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.553872 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.553878 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-16 00:57:16.553883 | orchestrator | 2026-03-16 00:57:16.553888 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-03-16 00:57:16.553894 | orchestrator | Monday 16 March 2026 00:49:40 +0000 (0:00:00.888) 0:03:26.621 ********** 2026-03-16 00:57:16.553899 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:57:16.553904 | orchestrator | 
ok: [testbed-node-3] 2026-03-16 00:57:16.553910 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:57:16.553915 | orchestrator | 2026-03-16 00:57:16.553920 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-03-16 00:57:16.553926 | orchestrator | Monday 16 March 2026 00:49:40 +0000 (0:00:00.395) 0:03:27.016 ********** 2026-03-16 00:57:16.553931 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:57:16.553936 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:57:16.553942 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:57:16.553947 | orchestrator | 2026-03-16 00:57:16.553952 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-03-16 00:57:16.553958 | orchestrator | Monday 16 March 2026 00:49:41 +0000 (0:00:01.227) 0:03:28.244 ********** 2026-03-16 00:57:16.553963 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-16 00:57:16.553968 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-16 00:57:16.553974 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-16 00:57:16.553979 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.553984 | orchestrator | 2026-03-16 00:57:16.553990 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-03-16 00:57:16.553995 | orchestrator | Monday 16 March 2026 00:49:42 +0000 (0:00:00.679) 0:03:28.923 ********** 2026-03-16 00:57:16.554001 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:57:16.554006 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:57:16.554038 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:57:16.554046 | orchestrator | 2026-03-16 00:57:16.554051 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-16 00:57:16.554057 | orchestrator | Monday 16 March 2026 00:49:42 +0000 (0:00:00.423) 0:03:29.346 ********** 
2026-03-16 00:57:16.554062 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.554067 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.554073 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.554078 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-16 00:57:16.554083 | orchestrator | 2026-03-16 00:57:16.554089 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-16 00:57:16.554094 | orchestrator | Monday 16 March 2026 00:49:43 +0000 (0:00:00.822) 0:03:30.169 ********** 2026-03-16 00:57:16.554100 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:57:16.554105 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:57:16.554110 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:57:16.554116 | orchestrator | 2026-03-16 00:57:16.554121 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-16 00:57:16.554126 | orchestrator | Monday 16 March 2026 00:49:44 +0000 (0:00:00.439) 0:03:30.608 ********** 2026-03-16 00:57:16.554136 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:57:16.554142 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:57:16.554147 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:57:16.554152 | orchestrator | 2026-03-16 00:57:16.554158 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-16 00:57:16.554163 | orchestrator | Monday 16 March 2026 00:49:45 +0000 (0:00:01.325) 0:03:31.934 ********** 2026-03-16 00:57:16.554168 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-16 00:57:16.554174 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-16 00:57:16.554179 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-16 00:57:16.554184 | orchestrator | skipping: [testbed-node-3] 
2026-03-16 00:57:16.554190 | orchestrator | 2026-03-16 00:57:16.554195 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-16 00:57:16.554201 | orchestrator | Monday 16 March 2026 00:49:46 +0000 (0:00:00.550) 0:03:32.485 ********** 2026-03-16 00:57:16.554206 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:57:16.554211 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:57:16.554217 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:57:16.554222 | orchestrator | 2026-03-16 00:57:16.554227 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-16 00:57:16.554233 | orchestrator | Monday 16 March 2026 00:49:46 +0000 (0:00:00.285) 0:03:32.771 ********** 2026-03-16 00:57:16.554238 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.554243 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.554259 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.554265 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.554271 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.554276 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.554281 | orchestrator | 2026-03-16 00:57:16.554287 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-16 00:57:16.554292 | orchestrator | Monday 16 March 2026 00:49:47 +0000 (0:00:00.686) 0:03:33.457 ********** 2026-03-16 00:57:16.554298 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.554303 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.554308 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.554314 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:57:16.554319 | orchestrator | 2026-03-16 00:57:16.554324 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before 
restart] ******** 2026-03-16 00:57:16.554330 | orchestrator | Monday 16 March 2026 00:49:47 +0000 (0:00:00.716) 0:03:34.174 ********** 2026-03-16 00:57:16.554335 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.554340 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.554346 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.554351 | orchestrator | 2026-03-16 00:57:16.554356 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-16 00:57:16.554365 | orchestrator | Monday 16 March 2026 00:49:48 +0000 (0:00:00.500) 0:03:34.675 ********** 2026-03-16 00:57:16.554371 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:57:16.554378 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:57:16.554388 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:57:16.554396 | orchestrator | 2026-03-16 00:57:16.554409 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-16 00:57:16.554420 | orchestrator | Monday 16 March 2026 00:49:49 +0000 (0:00:01.322) 0:03:35.997 ********** 2026-03-16 00:57:16.554428 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-16 00:57:16.554438 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-16 00:57:16.554470 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-16 00:57:16.554480 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.554488 | orchestrator | 2026-03-16 00:57:16.554497 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-03-16 00:57:16.554512 | orchestrator | Monday 16 March 2026 00:49:50 +0000 (0:00:00.598) 0:03:36.596 ********** 2026-03-16 00:57:16.554521 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.554532 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.554541 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.554549 | orchestrator | 
2026-03-16 00:57:16.554558 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-03-16 00:57:16.554568 | orchestrator | 2026-03-16 00:57:16.554577 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-16 00:57:16.554587 | orchestrator | Monday 16 March 2026 00:49:50 +0000 (0:00:00.578) 0:03:37.174 ********** 2026-03-16 00:57:16.554597 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:57:16.554607 | orchestrator | 2026-03-16 00:57:16.554616 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-16 00:57:16.554625 | orchestrator | Monday 16 March 2026 00:49:51 +0000 (0:00:00.720) 0:03:37.895 ********** 2026-03-16 00:57:16.554631 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:57:16.554637 | orchestrator | 2026-03-16 00:57:16.554642 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-16 00:57:16.554647 | orchestrator | Monday 16 March 2026 00:49:52 +0000 (0:00:00.479) 0:03:38.374 ********** 2026-03-16 00:57:16.554653 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.554658 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.554663 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.554669 | orchestrator | 2026-03-16 00:57:16.554674 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-16 00:57:16.554679 | orchestrator | Monday 16 March 2026 00:49:52 +0000 (0:00:00.858) 0:03:39.233 ********** 2026-03-16 00:57:16.554685 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.554690 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.554695 | orchestrator | skipping: 
[testbed-node-2] 2026-03-16 00:57:16.554701 | orchestrator | 2026-03-16 00:57:16.554706 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-16 00:57:16.554711 | orchestrator | Monday 16 March 2026 00:49:53 +0000 (0:00:00.366) 0:03:39.600 ********** 2026-03-16 00:57:16.554717 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.554722 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.554727 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.554733 | orchestrator | 2026-03-16 00:57:16.554738 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-16 00:57:16.554743 | orchestrator | Monday 16 March 2026 00:49:53 +0000 (0:00:00.350) 0:03:39.950 ********** 2026-03-16 00:57:16.554748 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.554754 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.554759 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.554764 | orchestrator | 2026-03-16 00:57:16.554770 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-16 00:57:16.554775 | orchestrator | Monday 16 March 2026 00:49:53 +0000 (0:00:00.325) 0:03:40.276 ********** 2026-03-16 00:57:16.554780 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.554786 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.554791 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.554796 | orchestrator | 2026-03-16 00:57:16.554802 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-16 00:57:16.554807 | orchestrator | Monday 16 March 2026 00:49:54 +0000 (0:00:00.914) 0:03:41.190 ********** 2026-03-16 00:57:16.554812 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.554818 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.554823 | orchestrator | skipping: [testbed-node-2] 
2026-03-16 00:57:16.554828 | orchestrator | 2026-03-16 00:57:16.554849 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-16 00:57:16.554860 | orchestrator | Monday 16 March 2026 00:49:55 +0000 (0:00:00.266) 0:03:41.457 ********** 2026-03-16 00:57:16.554866 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.554871 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.554876 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.554881 | orchestrator | 2026-03-16 00:57:16.554887 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-16 00:57:16.554892 | orchestrator | Monday 16 March 2026 00:49:55 +0000 (0:00:00.239) 0:03:41.696 ********** 2026-03-16 00:57:16.554897 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.554903 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.554908 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.554913 | orchestrator | 2026-03-16 00:57:16.554919 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-16 00:57:16.554924 | orchestrator | Monday 16 March 2026 00:49:56 +0000 (0:00:00.720) 0:03:42.416 ********** 2026-03-16 00:57:16.554929 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.554935 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.554940 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.554945 | orchestrator | 2026-03-16 00:57:16.554951 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-16 00:57:16.554956 | orchestrator | Monday 16 March 2026 00:49:57 +0000 (0:00:00.993) 0:03:43.410 ********** 2026-03-16 00:57:16.554961 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.554971 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.554977 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.554982 | 
orchestrator | 2026-03-16 00:57:16.554987 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-16 00:57:16.554993 | orchestrator | Monday 16 March 2026 00:49:57 +0000 (0:00:00.243) 0:03:43.654 ********** 2026-03-16 00:57:16.554998 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.555003 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.555009 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.555014 | orchestrator | 2026-03-16 00:57:16.555019 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-16 00:57:16.555024 | orchestrator | Monday 16 March 2026 00:49:57 +0000 (0:00:00.246) 0:03:43.900 ********** 2026-03-16 00:57:16.555030 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.555035 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.555041 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.555046 | orchestrator | 2026-03-16 00:57:16.555051 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-16 00:57:16.555057 | orchestrator | Monday 16 March 2026 00:49:57 +0000 (0:00:00.253) 0:03:44.153 ********** 2026-03-16 00:57:16.555062 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.555067 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.555072 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.555078 | orchestrator | 2026-03-16 00:57:16.555083 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-16 00:57:16.555088 | orchestrator | Monday 16 March 2026 00:49:58 +0000 (0:00:00.277) 0:03:44.431 ********** 2026-03-16 00:57:16.555094 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.555099 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.555104 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.555109 | orchestrator | 
2026-03-16 00:57:16.555115 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-16 00:57:16.555120 | orchestrator | Monday 16 March 2026 00:49:58 +0000 (0:00:00.551) 0:03:44.983 ********** 2026-03-16 00:57:16.555125 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.555131 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.555136 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.555141 | orchestrator | 2026-03-16 00:57:16.555146 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-16 00:57:16.555152 | orchestrator | Monday 16 March 2026 00:49:58 +0000 (0:00:00.282) 0:03:45.265 ********** 2026-03-16 00:57:16.555161 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.555167 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.555172 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.555177 | orchestrator | 2026-03-16 00:57:16.555183 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-16 00:57:16.555188 | orchestrator | Monday 16 March 2026 00:49:59 +0000 (0:00:00.274) 0:03:45.540 ********** 2026-03-16 00:57:16.555193 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.555199 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.555204 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.555209 | orchestrator | 2026-03-16 00:57:16.555215 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-16 00:57:16.555220 | orchestrator | Monday 16 March 2026 00:49:59 +0000 (0:00:00.397) 0:03:45.938 ********** 2026-03-16 00:57:16.555225 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.555231 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.555236 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.555241 | orchestrator | 2026-03-16 00:57:16.555247 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-16 00:57:16.555252 | orchestrator | Monday 16 March 2026 00:50:00 +0000 (0:00:00.727) 0:03:46.665 ********** 2026-03-16 00:57:16.555257 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.555262 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.555268 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.555273 | orchestrator | 2026-03-16 00:57:16.555278 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-03-16 00:57:16.555284 | orchestrator | Monday 16 March 2026 00:50:00 +0000 (0:00:00.565) 0:03:47.230 ********** 2026-03-16 00:57:16.555289 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.555294 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.555300 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.555305 | orchestrator | 2026-03-16 00:57:16.555310 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-03-16 00:57:16.555316 | orchestrator | Monday 16 March 2026 00:50:01 +0000 (0:00:00.330) 0:03:47.561 ********** 2026-03-16 00:57:16.555321 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:57:16.555327 | orchestrator | 2026-03-16 00:57:16.555332 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-03-16 00:57:16.555348 | orchestrator | Monday 16 March 2026 00:50:02 +0000 (0:00:00.910) 0:03:48.472 ********** 2026-03-16 00:57:16.555354 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.555359 | orchestrator | 2026-03-16 00:57:16.555365 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-03-16 00:57:16.555370 | orchestrator | Monday 16 March 2026 00:50:02 +0000 (0:00:00.169) 0:03:48.641 ********** 2026-03-16 00:57:16.555375 
| orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-16 00:57:16.555381 | orchestrator | 2026-03-16 00:57:16.555386 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-03-16 00:57:16.555392 | orchestrator | Monday 16 March 2026 00:50:03 +0000 (0:00:01.020) 0:03:49.661 ********** 2026-03-16 00:57:16.555397 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.555402 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.555408 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.555413 | orchestrator | 2026-03-16 00:57:16.555418 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-03-16 00:57:16.555424 | orchestrator | Monday 16 March 2026 00:50:03 +0000 (0:00:00.363) 0:03:50.024 ********** 2026-03-16 00:57:16.555429 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.555435 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.555440 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.555461 | orchestrator | 2026-03-16 00:57:16.555471 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-03-16 00:57:16.555484 | orchestrator | Monday 16 March 2026 00:50:04 +0000 (0:00:00.366) 0:03:50.391 ********** 2026-03-16 00:57:16.555499 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:57:16.555507 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:57:16.555513 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:57:16.555518 | orchestrator | 2026-03-16 00:57:16.555524 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-03-16 00:57:16.555529 | orchestrator | Monday 16 March 2026 00:50:05 +0000 (0:00:01.413) 0:03:51.804 ********** 2026-03-16 00:57:16.555534 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:57:16.555540 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:57:16.555545 | orchestrator | changed: 
[testbed-node-2] 2026-03-16 00:57:16.555550 | orchestrator | 2026-03-16 00:57:16.555556 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-03-16 00:57:16.555561 | orchestrator | Monday 16 March 2026 00:50:06 +0000 (0:00:00.863) 0:03:52.667 ********** 2026-03-16 00:57:16.555567 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:57:16.555572 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:57:16.555578 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:57:16.555587 | orchestrator | 2026-03-16 00:57:16.555595 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-03-16 00:57:16.555603 | orchestrator | Monday 16 March 2026 00:50:07 +0000 (0:00:00.739) 0:03:53.407 ********** 2026-03-16 00:57:16.555613 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.555621 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.555629 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.555637 | orchestrator | 2026-03-16 00:57:16.555645 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-03-16 00:57:16.555653 | orchestrator | Monday 16 March 2026 00:50:07 +0000 (0:00:00.708) 0:03:54.115 ********** 2026-03-16 00:57:16.555662 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:57:16.555669 | orchestrator | 2026-03-16 00:57:16.555677 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-03-16 00:57:16.555685 | orchestrator | Monday 16 March 2026 00:50:09 +0000 (0:00:01.498) 0:03:55.614 ********** 2026-03-16 00:57:16.555693 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.555701 | orchestrator | 2026-03-16 00:57:16.555709 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-03-16 00:57:16.555716 | orchestrator | Monday 16 March 2026 00:50:09 +0000 (0:00:00.605) 0:03:56.219 ********** 2026-03-16 
00:57:16.555726 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-16 00:57:16.555734 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-16 00:57:16.555743 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-16 00:57:16.555752 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-03-16 00:57:16.555760 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-16 00:57:16.555768 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-16 00:57:16.555777 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-16 00:57:16.555785 | orchestrator | changed: [testbed-node-2 -> {{ item }}] 2026-03-16 00:57:16.555793 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-16 00:57:16.555802 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-16 00:57:16.555810 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-03-16 00:57:16.555819 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-03-16 00:57:16.555828 | orchestrator | 2026-03-16 00:57:16.555838 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-03-16 00:57:16.555844 | orchestrator | Monday 16 March 2026 00:50:13 +0000 (0:00:03.602) 0:03:59.821 ********** 2026-03-16 00:57:16.555849 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:57:16.555855 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:57:16.555860 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:57:16.555865 | orchestrator | 2026-03-16 00:57:16.555871 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-03-16 00:57:16.555882 | orchestrator | Monday 16 March 2026 00:50:14 +0000 (0:00:01.474) 0:04:01.296 ********** 2026-03-16 00:57:16.555887 | orchestrator | ok: 
[testbed-node-0] 2026-03-16 00:57:16.555893 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.555898 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.555904 | orchestrator | 2026-03-16 00:57:16.555909 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-03-16 00:57:16.555914 | orchestrator | Monday 16 March 2026 00:50:15 +0000 (0:00:00.307) 0:04:01.604 ********** 2026-03-16 00:57:16.555920 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.555925 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.555930 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.555936 | orchestrator | 2026-03-16 00:57:16.555955 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-03-16 00:57:16.555962 | orchestrator | Monday 16 March 2026 00:50:15 +0000 (0:00:00.574) 0:04:02.178 ********** 2026-03-16 00:57:16.555967 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:57:16.555972 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:57:16.555980 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:57:16.555992 | orchestrator | 2026-03-16 00:57:16.556004 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-03-16 00:57:16.556013 | orchestrator | Monday 16 March 2026 00:50:17 +0000 (0:00:01.930) 0:04:04.109 ********** 2026-03-16 00:57:16.556022 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:57:16.556031 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:57:16.556039 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:57:16.556048 | orchestrator | 2026-03-16 00:57:16.556056 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-03-16 00:57:16.556065 | orchestrator | Monday 16 March 2026 00:50:19 +0000 (0:00:01.344) 0:04:05.453 ********** 2026-03-16 00:57:16.556074 | orchestrator | skipping: [testbed-node-0] 2026-03-16 
00:57:16.556083 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.556091 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.556100 | orchestrator | 2026-03-16 00:57:16.556108 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-03-16 00:57:16.556118 | orchestrator | Monday 16 March 2026 00:50:19 +0000 (0:00:00.521) 0:04:05.974 ********** 2026-03-16 00:57:16.556133 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:57:16.556142 | orchestrator | 2026-03-16 00:57:16.556152 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-03-16 00:57:16.556162 | orchestrator | Monday 16 March 2026 00:50:20 +0000 (0:00:01.090) 0:04:07.064 ********** 2026-03-16 00:57:16.556171 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.556178 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.556186 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.556195 | orchestrator | 2026-03-16 00:57:16.556203 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-03-16 00:57:16.556208 | orchestrator | Monday 16 March 2026 00:50:21 +0000 (0:00:00.378) 0:04:07.443 ********** 2026-03-16 00:57:16.556213 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.556219 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.556224 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.556229 | orchestrator | 2026-03-16 00:57:16.556234 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-03-16 00:57:16.556240 | orchestrator | Monday 16 March 2026 00:50:21 +0000 (0:00:00.359) 0:04:07.803 ********** 2026-03-16 00:57:16.556245 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-03-16 00:57:16.556251 | orchestrator | 2026-03-16 00:57:16.556256 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-03-16 00:57:16.556262 | orchestrator | Monday 16 March 2026 00:50:22 +0000 (0:00:00.974) 0:04:08.777 ********** 2026-03-16 00:57:16.556279 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:57:16.556284 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:57:16.556290 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:57:16.556295 | orchestrator | 2026-03-16 00:57:16.556300 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-03-16 00:57:16.556306 | orchestrator | Monday 16 March 2026 00:50:24 +0000 (0:00:01.889) 0:04:10.667 ********** 2026-03-16 00:57:16.556311 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:57:16.556316 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:57:16.556322 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:57:16.556327 | orchestrator | 2026-03-16 00:57:16.556332 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-03-16 00:57:16.556338 | orchestrator | Monday 16 March 2026 00:50:25 +0000 (0:00:01.122) 0:04:11.789 ********** 2026-03-16 00:57:16.556343 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:57:16.556348 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:57:16.556354 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:57:16.556359 | orchestrator | 2026-03-16 00:57:16.556364 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-03-16 00:57:16.556370 | orchestrator | Monday 16 March 2026 00:50:27 +0000 (0:00:02.240) 0:04:14.030 ********** 2026-03-16 00:57:16.556375 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:57:16.556380 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:57:16.556386 | orchestrator | changed: [testbed-node-2] 
2026-03-16 00:57:16.556391 | orchestrator | 2026-03-16 00:57:16.556396 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-03-16 00:57:16.556401 | orchestrator | Monday 16 March 2026 00:50:29 +0000 (0:00:02.259) 0:04:16.290 ********** 2026-03-16 00:57:16.556407 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-1, testbed-node-0, testbed-node-2 2026-03-16 00:57:16.556412 | orchestrator | 2026-03-16 00:57:16.556417 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-03-16 00:57:16.556423 | orchestrator | Monday 16 March 2026 00:50:30 +0000 (0:00:00.812) 0:04:17.102 ********** 2026-03-16 00:57:16.556428 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.556433 | orchestrator | 2026-03-16 00:57:16.556440 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-03-16 00:57:16.556502 | orchestrator | Monday 16 March 2026 00:50:32 +0000 (0:00:01.349) 0:04:18.452 ********** 2026-03-16 00:57:16.556513 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.556521 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.556529 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.556536 | orchestrator | 2026-03-16 00:57:16.556545 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-03-16 00:57:16.556554 | orchestrator | Monday 16 March 2026 00:50:41 +0000 (0:00:09.609) 0:04:28.061 ********** 2026-03-16 00:57:16.556563 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.556571 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.556580 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.556589 | orchestrator | 2026-03-16 00:57:16.556598 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-03-16 00:57:16.556624 | orchestrator | Monday 16 March 2026 
00:50:42 +0000 (0:00:00.491) 0:04:28.552 ********** 2026-03-16 00:57:16.556633 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__07c2a99ebb56b4da2cc35985971bb0c7bbd9adf3'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-03-16 00:57:16.556648 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__07c2a99ebb56b4da2cc35985971bb0c7bbd9adf3'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-03-16 00:57:16.556675 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__07c2a99ebb56b4da2cc35985971bb0c7bbd9adf3'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-16 00:57:16.556686 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__07c2a99ebb56b4da2cc35985971bb0c7bbd9adf3'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-16 00:57:16.556697 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 
'osd_crush_chooseleaf_type': '__omit_place_holder__07c2a99ebb56b4da2cc35985971bb0c7bbd9adf3'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-03-16 00:57:16.556710 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__07c2a99ebb56b4da2cc35985971bb0c7bbd9adf3'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__07c2a99ebb56b4da2cc35985971bb0c7bbd9adf3'}])  2026-03-16 00:57:16.556723 | orchestrator | 2026-03-16 00:57:16.556736 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-16 00:57:16.556744 | orchestrator | Monday 16 March 2026 00:50:57 +0000 (0:00:14.833) 0:04:43.386 ********** 2026-03-16 00:57:16.556753 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.556761 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.556771 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.556778 | orchestrator | 2026-03-16 00:57:16.556786 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-16 00:57:16.556795 | orchestrator | Monday 16 March 2026 00:50:57 +0000 (0:00:00.313) 0:04:43.699 ********** 2026-03-16 00:57:16.556804 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:57:16.556813 | orchestrator | 2026-03-16 00:57:16.556822 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-03-16 00:57:16.556831 | orchestrator | Monday 16 March 2026 00:50:58 +0000 (0:00:00.717) 0:04:44.417 ********** 2026-03-16 00:57:16.556840 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.556848 | orchestrator | ok: [testbed-node-0] 2026-03-16 
00:57:16.556854 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.556859 | orchestrator | 2026-03-16 00:57:16.556864 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-03-16 00:57:16.556870 | orchestrator | Monday 16 March 2026 00:50:58 +0000 (0:00:00.503) 0:04:44.920 ********** 2026-03-16 00:57:16.556875 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.556880 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.556886 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.556891 | orchestrator | 2026-03-16 00:57:16.556896 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-03-16 00:57:16.556902 | orchestrator | Monday 16 March 2026 00:50:58 +0000 (0:00:00.310) 0:04:45.230 ********** 2026-03-16 00:57:16.556907 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-16 00:57:16.556913 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-16 00:57:16.556918 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-16 00:57:16.556923 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.556934 | orchestrator | 2026-03-16 00:57:16.556940 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-03-16 00:57:16.556945 | orchestrator | Monday 16 March 2026 00:50:59 +0000 (0:00:00.843) 0:04:46.074 ********** 2026-03-16 00:57:16.556951 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.556971 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.556977 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.556982 | orchestrator | 2026-03-16 00:57:16.556987 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-03-16 00:57:16.556993 | orchestrator | 2026-03-16 00:57:16.556998 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] 
************************ 2026-03-16 00:57:16.557004 | orchestrator | Monday 16 March 2026 00:51:00 +0000 (0:00:00.505) 0:04:46.580 ********** 2026-03-16 00:57:16.557009 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:57:16.557015 | orchestrator | 2026-03-16 00:57:16.557020 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-16 00:57:16.557025 | orchestrator | Monday 16 March 2026 00:51:00 +0000 (0:00:00.489) 0:04:47.069 ********** 2026-03-16 00:57:16.557029 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:57:16.557034 | orchestrator | 2026-03-16 00:57:16.557039 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-16 00:57:16.557044 | orchestrator | Monday 16 March 2026 00:51:01 +0000 (0:00:00.709) 0:04:47.778 ********** 2026-03-16 00:57:16.557048 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.557053 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.557062 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.557067 | orchestrator | 2026-03-16 00:57:16.557072 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-16 00:57:16.557076 | orchestrator | Monday 16 March 2026 00:51:02 +0000 (0:00:00.794) 0:04:48.573 ********** 2026-03-16 00:57:16.557081 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.557086 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.557090 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.557095 | orchestrator | 2026-03-16 00:57:16.557100 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-16 00:57:16.557105 | orchestrator | Monday 16 March 2026 00:51:02 +0000 
(0:00:00.274) 0:04:48.847 ********** 2026-03-16 00:57:16.557109 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.557114 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.557119 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.557123 | orchestrator | 2026-03-16 00:57:16.557128 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-16 00:57:16.557133 | orchestrator | Monday 16 March 2026 00:51:02 +0000 (0:00:00.453) 0:04:49.301 ********** 2026-03-16 00:57:16.557138 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.557143 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.557147 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.557152 | orchestrator | 2026-03-16 00:57:16.557159 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-16 00:57:16.557168 | orchestrator | Monday 16 March 2026 00:51:03 +0000 (0:00:00.289) 0:04:49.590 ********** 2026-03-16 00:57:16.557174 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.557178 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.557183 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.557188 | orchestrator | 2026-03-16 00:57:16.557193 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-16 00:57:16.557198 | orchestrator | Monday 16 March 2026 00:51:03 +0000 (0:00:00.672) 0:04:50.263 ********** 2026-03-16 00:57:16.557205 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.557212 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.557220 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.557236 | orchestrator | 2026-03-16 00:57:16.557244 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-16 00:57:16.557251 | orchestrator | Monday 16 March 2026 00:51:04 +0000 (0:00:00.279) 
0:04:50.542 ********** 2026-03-16 00:57:16.557259 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.557266 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.557273 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.557280 | orchestrator | 2026-03-16 00:57:16.557287 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-16 00:57:16.557294 | orchestrator | Monday 16 March 2026 00:51:04 +0000 (0:00:00.483) 0:04:51.025 ********** 2026-03-16 00:57:16.557302 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.557308 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.557315 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.557322 | orchestrator | 2026-03-16 00:57:16.557330 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-16 00:57:16.557337 | orchestrator | Monday 16 March 2026 00:51:05 +0000 (0:00:00.759) 0:04:51.784 ********** 2026-03-16 00:57:16.557345 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.557352 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.557360 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.557368 | orchestrator | 2026-03-16 00:57:16.557375 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-16 00:57:16.557383 | orchestrator | Monday 16 March 2026 00:51:06 +0000 (0:00:00.727) 0:04:52.512 ********** 2026-03-16 00:57:16.557391 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.557399 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.557407 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.557412 | orchestrator | 2026-03-16 00:57:16.557417 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-16 00:57:16.557422 | orchestrator | Monday 16 March 2026 00:51:06 +0000 (0:00:00.255) 0:04:52.767 ********** 2026-03-16 
00:57:16.557427 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.557431 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.557436 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.557441 | orchestrator | 2026-03-16 00:57:16.557465 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-16 00:57:16.557470 | orchestrator | Monday 16 March 2026 00:51:06 +0000 (0:00:00.571) 0:04:53.339 ********** 2026-03-16 00:57:16.557475 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.557480 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.557485 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.557489 | orchestrator | 2026-03-16 00:57:16.557494 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-16 00:57:16.557515 | orchestrator | Monday 16 March 2026 00:51:07 +0000 (0:00:00.321) 0:04:53.660 ********** 2026-03-16 00:57:16.557521 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.557525 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.557530 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.557535 | orchestrator | 2026-03-16 00:57:16.557539 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-16 00:57:16.557544 | orchestrator | Monday 16 March 2026 00:51:07 +0000 (0:00:00.337) 0:04:53.997 ********** 2026-03-16 00:57:16.557549 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.557554 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.557558 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.557563 | orchestrator | 2026-03-16 00:57:16.557568 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-16 00:57:16.557573 | orchestrator | Monday 16 March 2026 00:51:07 +0000 (0:00:00.330) 0:04:54.327 ********** 2026-03-16 00:57:16.557578 | 
orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.557582 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.557587 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.557592 | orchestrator | 2026-03-16 00:57:16.557597 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-16 00:57:16.557606 | orchestrator | Monday 16 March 2026 00:51:08 +0000 (0:00:00.317) 0:04:54.645 ********** 2026-03-16 00:57:16.557611 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.557616 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.557625 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.557630 | orchestrator | 2026-03-16 00:57:16.557635 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-16 00:57:16.557639 | orchestrator | Monday 16 March 2026 00:51:08 +0000 (0:00:00.637) 0:04:55.283 ********** 2026-03-16 00:57:16.557644 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.557649 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.557654 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.557658 | orchestrator | 2026-03-16 00:57:16.557663 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-16 00:57:16.557668 | orchestrator | Monday 16 March 2026 00:51:09 +0000 (0:00:00.344) 0:04:55.627 ********** 2026-03-16 00:57:16.557673 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.557678 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.557682 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.557687 | orchestrator | 2026-03-16 00:57:16.557692 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-16 00:57:16.557697 | orchestrator | Monday 16 March 2026 00:51:09 +0000 (0:00:00.311) 0:04:55.939 ********** 2026-03-16 00:57:16.557702 | orchestrator | ok: [testbed-node-0] 
2026-03-16 00:57:16.557706 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.557711 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.557716 | orchestrator | 2026-03-16 00:57:16.557721 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-16 00:57:16.557725 | orchestrator | Monday 16 March 2026 00:51:10 +0000 (0:00:00.626) 0:04:56.565 ********** 2026-03-16 00:57:16.557730 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-16 00:57:16.557735 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-16 00:57:16.557740 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-16 00:57:16.557745 | orchestrator | 2026-03-16 00:57:16.557750 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-16 00:57:16.557755 | orchestrator | Monday 16 March 2026 00:51:10 +0000 (0:00:00.586) 0:04:57.151 ********** 2026-03-16 00:57:16.557759 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:57:16.557764 | orchestrator | 2026-03-16 00:57:16.557769 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-03-16 00:57:16.557774 | orchestrator | Monday 16 March 2026 00:51:11 +0000 (0:00:00.500) 0:04:57.652 ********** 2026-03-16 00:57:16.557779 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:57:16.557783 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:57:16.557788 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:57:16.557793 | orchestrator | 2026-03-16 00:57:16.557798 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-03-16 00:57:16.557803 | orchestrator | Monday 16 March 2026 00:51:11 +0000 (0:00:00.687) 0:04:58.339 ********** 2026-03-16 00:57:16.557807 | 
orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.557812 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.557817 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.557822 | orchestrator | 2026-03-16 00:57:16.557826 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-03-16 00:57:16.557831 | orchestrator | Monday 16 March 2026 00:51:12 +0000 (0:00:00.631) 0:04:58.970 ********** 2026-03-16 00:57:16.557836 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-16 00:57:16.557841 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-16 00:57:16.557846 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-16 00:57:16.557851 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-03-16 00:57:16.557859 | orchestrator | 2026-03-16 00:57:16.557864 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-03-16 00:57:16.557869 | orchestrator | Monday 16 March 2026 00:51:23 +0000 (0:00:10.834) 0:05:09.804 ********** 2026-03-16 00:57:16.557874 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.557878 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.557883 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.557888 | orchestrator | 2026-03-16 00:57:16.557893 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-03-16 00:57:16.557897 | orchestrator | Monday 16 March 2026 00:51:23 +0000 (0:00:00.405) 0:05:10.210 ********** 2026-03-16 00:57:16.557902 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-16 00:57:16.557907 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-16 00:57:16.557912 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-16 00:57:16.557917 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-16 00:57:16.557921 | orchestrator | ok: 
[testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-16 00:57:16.557937 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-16 00:57:16.557943 | orchestrator | 2026-03-16 00:57:16.557948 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-03-16 00:57:16.557952 | orchestrator | Monday 16 March 2026 00:51:26 +0000 (0:00:02.569) 0:05:12.779 ********** 2026-03-16 00:57:16.557957 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-16 00:57:16.557962 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-16 00:57:16.557967 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-16 00:57:16.557972 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-16 00:57:16.557976 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-03-16 00:57:16.557981 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-16 00:57:16.557989 | orchestrator | 2026-03-16 00:57:16.557997 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-03-16 00:57:16.558005 | orchestrator | Monday 16 March 2026 00:51:27 +0000 (0:00:01.191) 0:05:13.970 ********** 2026-03-16 00:57:16.558044 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.558052 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.558057 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.558062 | orchestrator | 2026-03-16 00:57:16.558067 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-03-16 00:57:16.558075 | orchestrator | Monday 16 March 2026 00:51:28 +0000 (0:00:01.035) 0:05:15.006 ********** 2026-03-16 00:57:16.558080 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.558084 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.558089 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.558094 | 
orchestrator | 2026-03-16 00:57:16.558099 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-16 00:57:16.558104 | orchestrator | Monday 16 March 2026 00:51:28 +0000 (0:00:00.362) 0:05:15.369 ********** 2026-03-16 00:57:16.558108 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.558113 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.558118 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.558123 | orchestrator | 2026-03-16 00:57:16.558127 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-16 00:57:16.558132 | orchestrator | Monday 16 March 2026 00:51:29 +0000 (0:00:00.322) 0:05:15.692 ********** 2026-03-16 00:57:16.558137 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:57:16.558142 | orchestrator | 2026-03-16 00:57:16.558147 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-03-16 00:57:16.558152 | orchestrator | Monday 16 March 2026 00:51:30 +0000 (0:00:00.812) 0:05:16.504 ********** 2026-03-16 00:57:16.558156 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.558161 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.558170 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.558175 | orchestrator | 2026-03-16 00:57:16.558180 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-03-16 00:57:16.558185 | orchestrator | Monday 16 March 2026 00:51:30 +0000 (0:00:00.436) 0:05:16.941 ********** 2026-03-16 00:57:16.558189 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.558194 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.558199 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.558204 | orchestrator | 2026-03-16 00:57:16.558208 | orchestrator | TASK 
[ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-03-16 00:57:16.558213 | orchestrator | Monday 16 March 2026 00:51:30 +0000 (0:00:00.365) 0:05:17.306 ********** 2026-03-16 00:57:16.558218 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:57:16.558223 | orchestrator | 2026-03-16 00:57:16.558227 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-03-16 00:57:16.558232 | orchestrator | Monday 16 March 2026 00:51:31 +0000 (0:00:00.881) 0:05:18.188 ********** 2026-03-16 00:57:16.558237 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:57:16.558242 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:57:16.558247 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:57:16.558251 | orchestrator | 2026-03-16 00:57:16.558256 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-03-16 00:57:16.558261 | orchestrator | Monday 16 March 2026 00:51:33 +0000 (0:00:01.271) 0:05:19.460 ********** 2026-03-16 00:57:16.558266 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:57:16.558270 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:57:16.558275 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:57:16.558280 | orchestrator | 2026-03-16 00:57:16.558284 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-03-16 00:57:16.558289 | orchestrator | Monday 16 March 2026 00:51:34 +0000 (0:00:01.136) 0:05:20.597 ********** 2026-03-16 00:57:16.558294 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:57:16.558299 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:57:16.558303 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:57:16.558308 | orchestrator | 2026-03-16 00:57:16.558313 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 
2026-03-16 00:57:16.558318 | orchestrator | Monday 16 March 2026 00:51:37 +0000 (0:00:02.813) 0:05:23.411 ********** 2026-03-16 00:57:16.558323 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:57:16.558327 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:57:16.558332 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:57:16.558337 | orchestrator | 2026-03-16 00:57:16.558341 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-16 00:57:16.558346 | orchestrator | Monday 16 March 2026 00:51:39 +0000 (0:00:02.272) 0:05:25.683 ********** 2026-03-16 00:57:16.558351 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.558356 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.558360 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-03-16 00:57:16.558365 | orchestrator | 2026-03-16 00:57:16.558370 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-03-16 00:57:16.558375 | orchestrator | Monday 16 March 2026 00:51:39 +0000 (0:00:00.428) 0:05:26.111 ********** 2026-03-16 00:57:16.558391 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-03-16 00:57:16.558397 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-03-16 00:57:16.558402 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-03-16 00:57:16.558406 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-03-16 00:57:16.558411 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
2026-03-16 00:57:16.558419 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left). 2026-03-16 00:57:16.558424 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-16 00:57:16.558429 | orchestrator | 2026-03-16 00:57:16.558434 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-03-16 00:57:16.558439 | orchestrator | Monday 16 March 2026 00:52:16 +0000 (0:00:36.416) 0:06:02.527 ********** 2026-03-16 00:57:16.558457 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-16 00:57:16.558467 | orchestrator | 2026-03-16 00:57:16.558478 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-03-16 00:57:16.558487 | orchestrator | Monday 16 March 2026 00:52:17 +0000 (0:00:01.367) 0:06:03.895 ********** 2026-03-16 00:57:16.558493 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.558498 | orchestrator | 2026-03-16 00:57:16.558502 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-03-16 00:57:16.558507 | orchestrator | Monday 16 March 2026 00:52:17 +0000 (0:00:00.317) 0:06:04.212 ********** 2026-03-16 00:57:16.558512 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.558517 | orchestrator | 2026-03-16 00:57:16.558521 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-03-16 00:57:16.558526 | orchestrator | Monday 16 March 2026 00:52:17 +0000 (0:00:00.152) 0:06:04.365 ********** 2026-03-16 00:57:16.558531 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-03-16 00:57:16.558536 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-03-16 00:57:16.558540 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-03-16 
00:57:16.558545 | orchestrator | 2026-03-16 00:57:16.558550 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2026-03-16 00:57:16.558554 | orchestrator | Monday 16 March 2026 00:52:24 +0000 (0:00:06.643) 0:06:11.009 ********** 2026-03-16 00:57:16.558559 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-03-16 00:57:16.558564 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-03-16 00:57:16.558569 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-03-16 00:57:16.558574 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-03-16 00:57:16.558578 | orchestrator | 2026-03-16 00:57:16.558583 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-16 00:57:16.558588 | orchestrator | Monday 16 March 2026 00:52:30 +0000 (0:00:05.587) 0:06:16.596 ********** 2026-03-16 00:57:16.558594 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:57:16.558602 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:57:16.558614 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:57:16.558622 | orchestrator | 2026-03-16 00:57:16.558629 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-16 00:57:16.558636 | orchestrator | Monday 16 March 2026 00:52:31 +0000 (0:00:00.840) 0:06:17.436 ********** 2026-03-16 00:57:16.558644 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:57:16.558652 | orchestrator | 2026-03-16 00:57:16.558659 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-16 00:57:16.558666 | orchestrator | Monday 16 March 2026 00:52:31 +0000 (0:00:00.864) 0:06:18.300 ********** 2026-03-16 00:57:16.558673 | orchestrator | ok: [testbed-node-0] 
2026-03-16 00:57:16.558681 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.558688 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.558696 | orchestrator | 2026-03-16 00:57:16.558705 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-16 00:57:16.558710 | orchestrator | Monday 16 March 2026 00:52:32 +0000 (0:00:00.340) 0:06:18.641 ********** 2026-03-16 00:57:16.558715 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:57:16.558725 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:57:16.558730 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:57:16.558735 | orchestrator | 2026-03-16 00:57:16.558739 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-16 00:57:16.558744 | orchestrator | Monday 16 March 2026 00:52:33 +0000 (0:00:01.228) 0:06:19.870 ********** 2026-03-16 00:57:16.558749 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-16 00:57:16.558754 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-16 00:57:16.558759 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-16 00:57:16.558763 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.558768 | orchestrator | 2026-03-16 00:57:16.558773 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-03-16 00:57:16.558778 | orchestrator | Monday 16 March 2026 00:52:34 +0000 (0:00:00.935) 0:06:20.805 ********** 2026-03-16 00:57:16.558783 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.558787 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.558792 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.558797 | orchestrator | 2026-03-16 00:57:16.558802 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-03-16 00:57:16.558807 | orchestrator | 2026-03-16 00:57:16.558811 | 
orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-16 00:57:16.558829 | orchestrator | Monday 16 March 2026 00:52:35 +0000 (0:00:00.976) 0:06:21.782 ********** 2026-03-16 00:57:16.558835 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-16 00:57:16.558840 | orchestrator | 2026-03-16 00:57:16.558845 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-16 00:57:16.558850 | orchestrator | Monday 16 March 2026 00:52:35 +0000 (0:00:00.521) 0:06:22.304 ********** 2026-03-16 00:57:16.558854 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-16 00:57:16.558859 | orchestrator | 2026-03-16 00:57:16.558864 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-16 00:57:16.558869 | orchestrator | Monday 16 March 2026 00:52:36 +0000 (0:00:00.809) 0:06:23.113 ********** 2026-03-16 00:57:16.558874 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.558879 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.558883 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.558888 | orchestrator | 2026-03-16 00:57:16.558893 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-16 00:57:16.558898 | orchestrator | Monday 16 March 2026 00:52:37 +0000 (0:00:00.315) 0:06:23.429 ********** 2026-03-16 00:57:16.558903 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:57:16.558908 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:57:16.558913 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:57:16.558917 | orchestrator | 2026-03-16 00:57:16.558922 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-16 
00:57:16.558927 | orchestrator | Monday 16 March 2026 00:52:37 +0000 (0:00:00.738) 0:06:24.167 ********** 2026-03-16 00:57:16.558932 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:57:16.558937 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:57:16.558942 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:57:16.558947 | orchestrator | 2026-03-16 00:57:16.558952 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-16 00:57:16.558957 | orchestrator | Monday 16 March 2026 00:52:38 +0000 (0:00:00.764) 0:06:24.932 ********** 2026-03-16 00:57:16.558961 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:57:16.558966 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:57:16.558971 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:57:16.558976 | orchestrator | 2026-03-16 00:57:16.558980 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-16 00:57:16.558985 | orchestrator | Monday 16 March 2026 00:52:39 +0000 (0:00:01.135) 0:06:26.067 ********** 2026-03-16 00:57:16.558994 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.558998 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.559003 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.559008 | orchestrator | 2026-03-16 00:57:16.559013 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-16 00:57:16.559018 | orchestrator | Monday 16 March 2026 00:52:40 +0000 (0:00:00.320) 0:06:26.388 ********** 2026-03-16 00:57:16.559022 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.559027 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.559032 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.559037 | orchestrator | 2026-03-16 00:57:16.559041 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-16 00:57:16.559046 | orchestrator | Monday 
16 March 2026 00:52:40 +0000 (0:00:00.344) 0:06:26.733 ********** 2026-03-16 00:57:16.559051 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.559056 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.559061 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.559065 | orchestrator | 2026-03-16 00:57:16.559070 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-16 00:57:16.559075 | orchestrator | Monday 16 March 2026 00:52:40 +0000 (0:00:00.325) 0:06:27.058 ********** 2026-03-16 00:57:16.559148 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:57:16.559170 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:57:16.559175 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:57:16.559180 | orchestrator | 2026-03-16 00:57:16.559185 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-16 00:57:16.559190 | orchestrator | Monday 16 March 2026 00:52:41 +0000 (0:00:01.234) 0:06:28.292 ********** 2026-03-16 00:57:16.559194 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:57:16.559199 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:57:16.559204 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:57:16.559209 | orchestrator | 2026-03-16 00:57:16.559213 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-16 00:57:16.559218 | orchestrator | Monday 16 March 2026 00:52:42 +0000 (0:00:00.866) 0:06:29.158 ********** 2026-03-16 00:57:16.559223 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.559228 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.559233 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.559237 | orchestrator | 2026-03-16 00:57:16.559242 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-16 00:57:16.559247 | orchestrator | Monday 16 March 2026 00:52:43 +0000 
(0:00:00.336) 0:06:29.495 ********** 2026-03-16 00:57:16.559252 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.559257 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.559261 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.559266 | orchestrator | 2026-03-16 00:57:16.559271 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-16 00:57:16.559276 | orchestrator | Monday 16 March 2026 00:52:43 +0000 (0:00:00.313) 0:06:29.809 ********** 2026-03-16 00:57:16.559283 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:57:16.559291 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:57:16.559298 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:57:16.559306 | orchestrator | 2026-03-16 00:57:16.559313 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-16 00:57:16.559321 | orchestrator | Monday 16 March 2026 00:52:44 +0000 (0:00:00.658) 0:06:30.467 ********** 2026-03-16 00:57:16.559327 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:57:16.559334 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:57:16.559342 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:57:16.559349 | orchestrator | 2026-03-16 00:57:16.559357 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-16 00:57:16.559379 | orchestrator | Monday 16 March 2026 00:52:44 +0000 (0:00:00.438) 0:06:30.905 ********** 2026-03-16 00:57:16.559388 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:57:16.559403 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:57:16.559408 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:57:16.559413 | orchestrator | 2026-03-16 00:57:16.559417 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-16 00:57:16.559422 | orchestrator | Monday 16 March 2026 00:52:44 +0000 (0:00:00.346) 0:06:31.252 ********** 2026-03-16 
00:57:16.559427 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.559432 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.559437 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.559441 | orchestrator |
2026-03-16 00:57:16.559487 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-16 00:57:16.559493 | orchestrator | Monday 16 March 2026 00:52:45 +0000 (0:00:00.302) 0:06:31.555 **********
2026-03-16 00:57:16.559497 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.559502 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.559507 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.559512 | orchestrator |
2026-03-16 00:57:16.559517 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-16 00:57:16.559522 | orchestrator | Monday 16 March 2026 00:52:45 +0000 (0:00:00.620) 0:06:32.176 **********
2026-03-16 00:57:16.559527 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.559532 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.559540 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.559545 | orchestrator |
2026-03-16 00:57:16.559550 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-16 00:57:16.559555 | orchestrator | Monday 16 March 2026 00:52:46 +0000 (0:00:00.343) 0:06:32.519 **********
2026-03-16 00:57:16.559559 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.559564 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.559569 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.559574 | orchestrator |
2026-03-16 00:57:16.559578 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-16 00:57:16.559583 | orchestrator | Monday 16 March 2026 00:52:46 +0000 (0:00:00.322) 0:06:32.842 **********
2026-03-16 00:57:16.559588 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.559593 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.559597 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.559602 | orchestrator |
2026-03-16 00:57:16.559607 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-03-16 00:57:16.559612 | orchestrator | Monday 16 March 2026 00:52:47 +0000 (0:00:00.835) 0:06:33.678 **********
2026-03-16 00:57:16.559616 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.559621 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.559626 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.559631 | orchestrator |
2026-03-16 00:57:16.559635 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-03-16 00:57:16.559640 | orchestrator | Monday 16 March 2026 00:52:47 +0000 (0:00:00.392) 0:06:34.070 **********
2026-03-16 00:57:16.559645 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-16 00:57:16.559650 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-16 00:57:16.559655 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-16 00:57:16.559659 | orchestrator |
2026-03-16 00:57:16.559664 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-03-16 00:57:16.559669 | orchestrator | Monday 16 March 2026 00:52:48 +0000 (0:00:00.628) 0:06:34.698 **********
2026-03-16 00:57:16.559674 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-5, testbed-node-4
2026-03-16 00:57:16.559679 | orchestrator |
2026-03-16 00:57:16.559684 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-03-16 00:57:16.559688 | orchestrator | Monday 16 March 2026 00:52:48 +0000 (0:00:00.603) 0:06:35.301 **********
2026-03-16 00:57:16.559693 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.559703 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.559708 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.559713 | orchestrator |
2026-03-16 00:57:16.559717 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-03-16 00:57:16.559722 | orchestrator | Monday 16 March 2026 00:52:49 +0000 (0:00:00.613) 0:06:35.915 **********
2026-03-16 00:57:16.559727 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.559732 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.559737 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.559741 | orchestrator |
2026-03-16 00:57:16.559746 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-03-16 00:57:16.559751 | orchestrator | Monday 16 March 2026 00:52:49 +0000 (0:00:00.321) 0:06:36.236 **********
2026-03-16 00:57:16.559756 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.559761 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.559765 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.559770 | orchestrator |
2026-03-16 00:57:16.559774 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-03-16 00:57:16.559779 | orchestrator | Monday 16 March 2026 00:52:50 +0000 (0:00:00.709) 0:06:36.945 **********
2026-03-16 00:57:16.559783 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.559788 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.559792 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.559797 | orchestrator |
2026-03-16 00:57:16.559801 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-03-16 00:57:16.559806 | orchestrator | Monday 16 March 2026 00:52:50 +0000 (0:00:00.333) 0:06:37.279 **********
2026-03-16 00:57:16.559810 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-16 00:57:16.559815 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-16 00:57:16.559820 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-16 00:57:16.559828 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-16 00:57:16.559833 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-16 00:57:16.559837 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-16 00:57:16.559842 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-16 00:57:16.559846 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-16 00:57:16.559851 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-16 00:57:16.559855 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-16 00:57:16.559860 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-16 00:57:16.559864 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-16 00:57:16.559869 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-16 00:57:16.559873 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-16 00:57:16.559880 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-16 00:57:16.559885 | orchestrator |
2026-03-16 00:57:16.559890 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-03-16 00:57:16.559894 | orchestrator | Monday 16 March 2026 00:52:56 +0000 (0:00:05.637) 0:06:42.916 **********
2026-03-16 00:57:16.559899 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.559903 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.559908 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.559912 | orchestrator |
2026-03-16 00:57:16.559917 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-03-16 00:57:16.559928 | orchestrator | Monday 16 March 2026 00:52:56 +0000 (0:00:00.342) 0:06:43.259 **********
2026-03-16 00:57:16.559935 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-16 00:57:16.559942 | orchestrator |
2026-03-16 00:57:16.559948 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-03-16 00:57:16.559955 | orchestrator | Monday 16 March 2026 00:52:57 +0000 (0:00:00.515) 0:06:43.774 **********
2026-03-16 00:57:16.559962 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-16 00:57:16.559969 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-16 00:57:16.559976 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-03-16 00:57:16.559983 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-03-16 00:57:16.559990 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-16 00:57:16.559998 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-03-16 00:57:16.560005 | orchestrator |
2026-03-16 00:57:16.560012 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-03-16 00:57:16.560019 | orchestrator | Monday 16 March 2026 00:52:58 +0000 (0:00:01.549) 0:06:45.324 **********
2026-03-16 00:57:16.560026 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-16 00:57:16.560032 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-16 00:57:16.560037 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-16 00:57:16.560042 | orchestrator |
2026-03-16 00:57:16.560046 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-03-16 00:57:16.560051 | orchestrator | Monday 16 March 2026 00:53:01 +0000 (0:00:02.290) 0:06:47.615 **********
2026-03-16 00:57:16.560055 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-16 00:57:16.560060 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-16 00:57:16.560064 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:57:16.560069 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-16 00:57:16.560074 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-16 00:57:16.560078 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-16 00:57:16.560083 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-03-16 00:57:16.560087 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:57:16.560092 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:57:16.560096 | orchestrator |
2026-03-16 00:57:16.560101 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-03-16 00:57:16.560105 | orchestrator | Monday 16 March 2026 00:53:02 +0000 (0:00:01.208) 0:06:48.824 **********
2026-03-16 00:57:16.560110 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-16 00:57:16.560114 | orchestrator |
2026-03-16 00:57:16.560119 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-03-16 00:57:16.560123 | orchestrator | Monday 16 March 2026 00:53:04 +0000 (0:00:02.374) 0:06:51.198 **********
2026-03-16 00:57:16.560128 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-16 00:57:16.560132 | orchestrator |
2026-03-16 00:57:16.560137 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-03-16 00:57:16.560141 | orchestrator | Monday 16 March 2026 00:53:05 +0000 (0:00:00.983) 0:06:52.182 **********
2026-03-16 00:57:16.560146 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-20eacd0a-f744-531e-8511-c5afb936ef86', 'data_vg': 'ceph-20eacd0a-f744-531e-8511-c5afb936ef86'})
2026-03-16 00:57:16.560159 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-71e0430a-6bf1-53ec-905e-7c884e89f784', 'data_vg': 'ceph-71e0430a-6bf1-53ec-905e-7c884e89f784'})
2026-03-16 00:57:16.560164 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ded6401a-969b-5c16-b1be-1b69fe43ded8', 'data_vg': 'ceph-ded6401a-969b-5c16-b1be-1b69fe43ded8'})
2026-03-16 00:57:16.560173 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-01ad088d-533b-5bd8-92eb-284afc0ad32d', 'data_vg': 'ceph-01ad088d-533b-5bd8-92eb-284afc0ad32d'})
2026-03-16 00:57:16.560178 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-40b418b1-0bd6-568c-82b5-8ddc4abd3365', 'data_vg': 'ceph-40b418b1-0bd6-568c-82b5-8ddc4abd3365'})
2026-03-16 00:57:16.560182 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c2695ca6-70a1-5c1a-b7de-886954e6bf07', 'data_vg': 'ceph-c2695ca6-70a1-5c1a-b7de-886954e6bf07'})
2026-03-16 00:57:16.560187 | orchestrator |
2026-03-16 00:57:16.560191 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-03-16 00:57:16.560196 | orchestrator | Monday 16 March 2026 00:53:45 +0000 (0:00:40.037) 0:07:32.220 **********
2026-03-16 00:57:16.560200 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.560205 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.560209 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.560214 | orchestrator |
2026-03-16 00:57:16.560221 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-03-16 00:57:16.560226 | orchestrator | Monday 16 March 2026 00:53:46 +0000 (0:00:00.397) 0:07:32.617 **********
2026-03-16 00:57:16.560231 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-16 00:57:16.560235 | orchestrator |
2026-03-16 00:57:16.560240 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-03-16 00:57:16.560244 | orchestrator | Monday 16 March 2026 00:53:47 +0000 (0:00:00.906) 0:07:33.524 **********
2026-03-16 00:57:16.560249 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.560253 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.560258 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.560262 | orchestrator |
2026-03-16 00:57:16.560267 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-03-16 00:57:16.560271 | orchestrator | Monday 16 March 2026 00:53:47 +0000 (0:00:00.679) 0:07:34.204 **********
2026-03-16 00:57:16.560276 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.560280 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.560285 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.560289 | orchestrator |
2026-03-16 00:57:16.560294 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-03-16 00:57:16.560298 | orchestrator | Monday 16 March 2026 00:53:50 +0000 (0:00:02.745) 0:07:36.950 **********
2026-03-16 00:57:16.560303 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-16 00:57:16.560307 | orchestrator |
2026-03-16 00:57:16.560312 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-03-16 00:57:16.560316 | orchestrator | Monday 16 March 2026 00:53:51 +0000 (0:00:00.860) 0:07:37.810 **********
2026-03-16 00:57:16.560321 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:57:16.560326 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:57:16.560330 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:57:16.560335 | orchestrator |
2026-03-16 00:57:16.560339 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-03-16 00:57:16.560344 | orchestrator | Monday 16 March 2026 00:53:52 +0000 (0:00:01.221) 0:07:39.031 **********
2026-03-16 00:57:16.560348 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:57:16.560353 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:57:16.560357 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:57:16.560362 | orchestrator |
2026-03-16 00:57:16.560366 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-03-16 00:57:16.560371 | orchestrator | Monday 16 March 2026 00:53:53 +0000 (0:00:01.202) 0:07:40.234 **********
2026-03-16 00:57:16.560376 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:57:16.560380 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:57:16.560384 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:57:16.560394 | orchestrator |
2026-03-16 00:57:16.560399 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-03-16 00:57:16.560404 | orchestrator | Monday 16 March 2026 00:53:55 +0000 (0:00:01.998) 0:07:42.232 **********
2026-03-16 00:57:16.560408 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.560413 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.560417 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.560422 | orchestrator |
2026-03-16 00:57:16.560426 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-03-16 00:57:16.560431 | orchestrator | Monday 16 March 2026 00:53:56 +0000 (0:00:00.673) 0:07:42.906 **********
2026-03-16 00:57:16.560435 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.560440 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.560461 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.560466 | orchestrator |
2026-03-16 00:57:16.560471 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-03-16 00:57:16.560476 | orchestrator | Monday 16 March 2026 00:53:56 +0000 (0:00:00.365) 0:07:43.271 **********
2026-03-16 00:57:16.560480 | orchestrator | ok: [testbed-node-3] => (item=3)
2026-03-16 00:57:16.560485 | orchestrator | ok: [testbed-node-4] => (item=1)
2026-03-16 00:57:16.560489 | orchestrator | ok: [testbed-node-5] => (item=2)
2026-03-16 00:57:16.560494 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-16 00:57:16.560498 | orchestrator | ok: [testbed-node-4] => (item=4)
2026-03-16 00:57:16.560502 | orchestrator | ok: [testbed-node-5] => (item=5)
2026-03-16 00:57:16.560507 | orchestrator |
2026-03-16 00:57:16.560512 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-03-16 00:57:16.560516 | orchestrator | Monday 16 March 2026 00:53:57 +0000 (0:00:01.096) 0:07:44.368 **********
2026-03-16 00:57:16.560521 | orchestrator | changed: [testbed-node-3] => (item=3)
2026-03-16 00:57:16.560525 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-03-16 00:57:16.560533 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-03-16 00:57:16.560538 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-03-16 00:57:16.560543 | orchestrator | changed: [testbed-node-5] => (item=5)
2026-03-16 00:57:16.560547 | orchestrator | changed: [testbed-node-4] => (item=4)
2026-03-16 00:57:16.560552 | orchestrator |
2026-03-16 00:57:16.560556 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-03-16 00:57:16.560561 | orchestrator | Monday 16 March 2026 00:54:00 +0000 (0:00:02.325) 0:07:46.693 **********
2026-03-16 00:57:16.560566 | orchestrator | changed: [testbed-node-3] => (item=3)
2026-03-16 00:57:16.560570 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-03-16 00:57:16.560575 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-03-16 00:57:16.560579 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-03-16 00:57:16.560583 | orchestrator | changed: [testbed-node-5] => (item=5)
2026-03-16 00:57:16.560588 | orchestrator | changed: [testbed-node-4] => (item=4)
2026-03-16 00:57:16.560593 | orchestrator |
2026-03-16 00:57:16.560597 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-03-16 00:57:16.560602 | orchestrator | Monday 16 March 2026 00:54:04 +0000 (0:00:04.213) 0:07:50.906 **********
2026-03-16 00:57:16.560606 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.560611 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.560615 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-16 00:57:16.560620 | orchestrator |
2026-03-16 00:57:16.560627 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-03-16 00:57:16.560632 | orchestrator | Monday 16 March 2026 00:54:07 +0000 (0:00:03.019) 0:07:53.926 **********
2026-03-16 00:57:16.560636 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.560641 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.560645 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-03-16 00:57:16.560650 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-16 00:57:16.560658 | orchestrator |
2026-03-16 00:57:16.560663 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-03-16 00:57:16.560667 | orchestrator | Monday 16 March 2026 00:54:19 +0000 (0:00:12.432) 0:08:06.358 **********
2026-03-16 00:57:16.560672 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.560676 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.560681 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.560685 | orchestrator |
2026-03-16 00:57:16.560690 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-16 00:57:16.560695 | orchestrator | Monday 16 March 2026 00:54:21 +0000 (0:00:01.116) 0:08:07.475 **********
2026-03-16 00:57:16.560699 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.560704 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.560708 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.560713 | orchestrator |
2026-03-16 00:57:16.560717 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-03-16 00:57:16.560722 | orchestrator | Monday 16 March 2026 00:54:21 +0000 (0:00:00.333) 0:08:07.808 **********
2026-03-16 00:57:16.560726 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-16 00:57:16.560731 | orchestrator |
2026-03-16 00:57:16.560736 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-03-16 00:57:16.560740 | orchestrator | Monday 16 March 2026 00:54:22 +0000 (0:00:00.778) 0:08:08.587 **********
2026-03-16 00:57:16.560745 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-16 00:57:16.560749 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-16 00:57:16.560754 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-16 00:57:16.560758 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.560762 | orchestrator |
2026-03-16 00:57:16.560767 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-03-16 00:57:16.560771 | orchestrator | Monday 16 March 2026 00:54:22 +0000 (0:00:00.385) 0:08:08.972 **********
2026-03-16 00:57:16.560776 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.560781 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.560786 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.560793 | orchestrator |
2026-03-16 00:57:16.560800 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-03-16 00:57:16.560807 | orchestrator | Monday 16 March 2026 00:54:22 +0000 (0:00:00.344) 0:08:09.316 **********
2026-03-16 00:57:16.560816 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.560827 | orchestrator |
2026-03-16 00:57:16.560837 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-03-16 00:57:16.560844 | orchestrator | Monday 16 March 2026 00:54:23 +0000 (0:00:00.242) 0:08:09.559 **********
2026-03-16 00:57:16.560850 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.560857 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.560864 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.560870 | orchestrator |
2026-03-16 00:57:16.560877 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-03-16 00:57:16.560884 | orchestrator | Monday 16 March 2026 00:54:23 +0000 (0:00:00.352) 0:08:09.911 **********
2026-03-16 00:57:16.560891 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.560897 | orchestrator |
2026-03-16 00:57:16.560904 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-03-16 00:57:16.560911 | orchestrator | Monday 16 March 2026 00:54:23 +0000 (0:00:00.234) 0:08:10.146 **********
2026-03-16 00:57:16.560917 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.560925 | orchestrator |
2026-03-16 00:57:16.560931 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-03-16 00:57:16.560938 | orchestrator | Monday 16 March 2026 00:54:24 +0000 (0:00:00.265) 0:08:10.411 **********
2026-03-16 00:57:16.560946 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.560960 | orchestrator |
2026-03-16 00:57:16.560967 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-03-16 00:57:16.560980 | orchestrator | Monday 16 March 2026 00:54:24 +0000 (0:00:00.156) 0:08:10.567 **********
2026-03-16 00:57:16.560985 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.560989 | orchestrator |
2026-03-16 00:57:16.560994 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-03-16 00:57:16.560998 | orchestrator | Monday 16 March 2026 00:54:25 +0000 (0:00:00.905) 0:08:11.473 **********
2026-03-16 00:57:16.561003 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.561007 | orchestrator |
2026-03-16 00:57:16.561012 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-03-16 00:57:16.561016 | orchestrator | Monday 16 March 2026 00:54:25 +0000 (0:00:00.238) 0:08:11.711 **********
2026-03-16 00:57:16.561021 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-16 00:57:16.561025 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-16 00:57:16.561030 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-16 00:57:16.561034 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.561038 | orchestrator |
2026-03-16 00:57:16.561043 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-03-16 00:57:16.561048 | orchestrator | Monday 16 March 2026 00:54:25 +0000 (0:00:00.440) 0:08:12.152 **********
2026-03-16 00:57:16.561052 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.561057 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.561061 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.561066 | orchestrator |
2026-03-16 00:57:16.561074 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-03-16 00:57:16.561078 | orchestrator | Monday 16 March 2026 00:54:26 +0000 (0:00:00.369) 0:08:12.521 **********
2026-03-16 00:57:16.561083 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.561087 | orchestrator |
2026-03-16 00:57:16.561092 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-03-16 00:57:16.561097 | orchestrator | Monday 16 March 2026 00:54:26 +0000 (0:00:00.224) 0:08:12.745 **********
2026-03-16 00:57:16.561101 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.561106 | orchestrator |
2026-03-16 00:57:16.561110 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-03-16 00:57:16.561115 | orchestrator |
2026-03-16 00:57:16.561119 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-16 00:57:16.561124 | orchestrator | Monday 16 March 2026 00:54:27 +0000 (0:00:00.963) 0:08:13.708 **********
2026-03-16 00:57:16.561129 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-16 00:57:16.561134 | orchestrator |
2026-03-16 00:57:16.561139 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-16 00:57:16.561143 | orchestrator | Monday 16 March 2026 00:54:28 +0000 (0:00:01.214) 0:08:14.923 **********
2026-03-16 00:57:16.561148 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-16 00:57:16.561152 | orchestrator |
2026-03-16 00:57:16.561157 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-16 00:57:16.561161 | orchestrator | Monday 16 March 2026 00:54:30 +0000 (0:00:01.792) 0:08:16.716 **********
2026-03-16 00:57:16.561166 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.561171 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.561175 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.561179 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:57:16.561184 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:57:16.561188 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:57:16.561193 | orchestrator |
2026-03-16 00:57:16.561198 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-16 00:57:16.561204 | orchestrator | Monday 16 March 2026 00:54:31 +0000 (0:00:01.199) 0:08:17.915 **********
2026-03-16 00:57:16.561209 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.561213 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.561218 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.561222 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.561227 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.561231 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.561238 | orchestrator |
2026-03-16 00:57:16.561245 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-16 00:57:16.561251 | orchestrator | Monday 16 March 2026 00:54:32 +0000 (0:00:00.766) 0:08:18.682 **********
2026-03-16 00:57:16.561258 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.561264 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.561271 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.561278 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.561285 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.561293 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.561304 | orchestrator |
2026-03-16 00:57:16.561312 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-16 00:57:16.561319 | orchestrator | Monday 16 March 2026 00:54:33 +0000 (0:00:01.190) 0:08:19.872 **********
2026-03-16 00:57:16.561326 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.561333 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.561340 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.561346 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.561354 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.561361 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.561368 | orchestrator |
2026-03-16 00:57:16.561376 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-16 00:57:16.561385 | orchestrator | Monday 16 March 2026 00:54:34 +0000 (0:00:00.757) 0:08:20.630 **********
2026-03-16 00:57:16.561390 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.561394 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.561398 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.561403 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:57:16.561407 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:57:16.561412 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:57:16.561416 | orchestrator |
2026-03-16 00:57:16.561421 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-16 00:57:16.561430 | orchestrator | Monday 16 March 2026 00:54:35 +0000 (0:00:01.363) 0:08:21.993 **********
2026-03-16 00:57:16.561435 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.561439 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.561480 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.561486 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.561491 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.561496 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.561500 | orchestrator |
2026-03-16 00:57:16.561504 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-16 00:57:16.561509 | orchestrator | Monday 16 March 2026 00:54:36 +0000 (0:00:00.772) 0:08:22.766 **********
2026-03-16 00:57:16.561514 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.561518 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.561522 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.561527 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.561531 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.561536 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.561540 | orchestrator |
2026-03-16 00:57:16.561545 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-16 00:57:16.561549 | orchestrator | Monday 16 March 2026 00:54:37 +0000 (0:00:00.868) 0:08:23.635 **********
2026-03-16 00:57:16.561554 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.561566 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.561571 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.561575 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:57:16.561583 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:57:16.561588 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:57:16.561592 | orchestrator |
2026-03-16 00:57:16.561597 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-16 00:57:16.561602 | orchestrator | Monday 16 March 2026 00:54:38 +0000 (0:00:01.040) 0:08:24.676 **********
2026-03-16 00:57:16.561606 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.561610 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.561615 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.561619 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:57:16.561624 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:57:16.561628 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:57:16.561633 | orchestrator |
2026-03-16 00:57:16.561637 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-16 00:57:16.561642 | orchestrator | Monday 16 March 2026 00:54:39 +0000 (0:00:01.491) 0:08:26.167 **********
2026-03-16 00:57:16.561646 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.561651 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.561655 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.561660 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.561664 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.561669 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.561673 | orchestrator |
2026-03-16 00:57:16.561678 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-16 00:57:16.561682 | orchestrator | Monday 16 March 2026 00:54:40 +0000 (0:00:00.610) 0:08:26.777 **********
2026-03-16 00:57:16.561687 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.561691 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.561696 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.561700 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:57:16.561705 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:57:16.561709 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:57:16.561714 | orchestrator |
2026-03-16 00:57:16.561718 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-16 00:57:16.561723 | orchestrator | Monday 16 March 2026 00:54:41 +0000 (0:00:00.954) 0:08:27.732 **********
2026-03-16 00:57:16.561727 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.561732 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.561736 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.561740 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.561745 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.561749 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.561754 | orchestrator |
2026-03-16 00:57:16.561758 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-16 00:57:16.561763 | orchestrator | Monday 16 March 2026 00:54:41 +0000 (0:00:00.631) 0:08:28.363 **********
2026-03-16 00:57:16.561767 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.561772 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.561776 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.561781 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.561785 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:57:16.561790 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:57:16.561794 | orchestrator |
2026-03-16 00:57:16.561799 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-16 00:57:16.561803 | orchestrator | Monday 16 March 2026 00:54:42 +0000 (0:00:00.918) 0:08:29.282 **********
2026-03-16 00:57:16.561808 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.561812 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.561817 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.561821 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:57:16.561826 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.561830 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.561838 | orchestrator | 2026-03-16 00:57:16.561843 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-16 00:57:16.561847 | orchestrator | Monday 16 March 2026 00:54:43 +0000 (0:00:00.665) 0:08:29.947 ********** 2026-03-16 00:57:16.561852 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.561856 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.561861 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.561865 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.561870 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.561874 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.561879 | orchestrator | 2026-03-16 00:57:16.561883 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-16 00:57:16.561888 | orchestrator | Monday 16 March 2026 00:54:44 +0000 (0:00:00.917) 0:08:30.864 ********** 2026-03-16 00:57:16.561892 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.561897 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.561901 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.561906 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:16.561910 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:16.561915 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:16.561919 | orchestrator | 2026-03-16 00:57:16.561927 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-16 00:57:16.561932 | orchestrator | Monday 16 March 2026 00:54:45 +0000 (0:00:00.611) 0:08:31.476 ********** 2026-03-16 00:57:16.561937 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.561941 | orchestrator | skipping: [testbed-node-4] 
2026-03-16 00:57:16.561946 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.561950 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.561955 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.561959 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.561964 | orchestrator | 2026-03-16 00:57:16.561968 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-16 00:57:16.561973 | orchestrator | Monday 16 March 2026 00:54:45 +0000 (0:00:00.891) 0:08:32.368 ********** 2026-03-16 00:57:16.561977 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:57:16.561982 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:57:16.561986 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:57:16.561991 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.561995 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.561999 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.562004 | orchestrator | 2026-03-16 00:57:16.562008 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-16 00:57:16.562055 | orchestrator | Monday 16 March 2026 00:54:46 +0000 (0:00:00.653) 0:08:33.022 ********** 2026-03-16 00:57:16.562060 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:57:16.562064 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:57:16.562068 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:57:16.562072 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.562079 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.562084 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.562088 | orchestrator | 2026-03-16 00:57:16.562092 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-03-16 00:57:16.562096 | orchestrator | Monday 16 March 2026 00:54:47 +0000 (0:00:01.346) 0:08:34.368 ********** 2026-03-16 00:57:16.562100 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-03-16 00:57:16.562104 | orchestrator | 2026-03-16 00:57:16.562108 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-03-16 00:57:16.562112 | orchestrator | Monday 16 March 2026 00:54:52 +0000 (0:00:04.086) 0:08:38.454 ********** 2026-03-16 00:57:16.562117 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-16 00:57:16.562121 | orchestrator | 2026-03-16 00:57:16.562125 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-03-16 00:57:16.562129 | orchestrator | Monday 16 March 2026 00:54:53 +0000 (0:00:01.893) 0:08:40.348 ********** 2026-03-16 00:57:16.562136 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:57:16.562141 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:57:16.562145 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:57:16.562149 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.562153 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:57:16.562157 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:57:16.562161 | orchestrator | 2026-03-16 00:57:16.562165 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-03-16 00:57:16.562169 | orchestrator | Monday 16 March 2026 00:54:55 +0000 (0:00:02.002) 0:08:42.350 ********** 2026-03-16 00:57:16.562173 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:57:16.562177 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:57:16.562181 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:57:16.562185 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:57:16.562189 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:57:16.562194 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:57:16.562198 | orchestrator | 2026-03-16 00:57:16.562202 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2026-03-16 00:57:16.562206 | orchestrator | Monday 16 March 2026 00:54:57 +0000 (0:00:01.089) 0:08:43.440 ********** 2026-03-16 00:57:16.562210 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:57:16.562216 | orchestrator | 2026-03-16 00:57:16.562220 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-03-16 00:57:16.562224 | orchestrator | Monday 16 March 2026 00:54:58 +0000 (0:00:01.357) 0:08:44.797 ********** 2026-03-16 00:57:16.562228 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:57:16.562232 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:57:16.562236 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:57:16.562240 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:57:16.562244 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:57:16.562248 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:57:16.562252 | orchestrator | 2026-03-16 00:57:16.562256 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-03-16 00:57:16.562260 | orchestrator | Monday 16 March 2026 00:55:00 +0000 (0:00:01.919) 0:08:46.717 ********** 2026-03-16 00:57:16.562264 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:57:16.562268 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:57:16.562272 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:57:16.562276 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:57:16.562280 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:57:16.562284 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:57:16.562289 | orchestrator | 2026-03-16 00:57:16.562293 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-03-16 00:57:16.562297 | orchestrator | Monday 16 March 2026 00:55:04 +0000 (0:00:04.356) 
0:08:51.073 ********** 2026-03-16 00:57:16.562301 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:57:16.562305 | orchestrator | 2026-03-16 00:57:16.562309 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-03-16 00:57:16.562314 | orchestrator | Monday 16 March 2026 00:55:06 +0000 (0:00:01.455) 0:08:52.529 ********** 2026-03-16 00:57:16.562318 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:57:16.562322 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:57:16.562326 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:57:16.562330 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.562334 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.562338 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.562342 | orchestrator | 2026-03-16 00:57:16.562350 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-03-16 00:57:16.562354 | orchestrator | Monday 16 March 2026 00:55:07 +0000 (0:00:00.887) 0:08:53.416 ********** 2026-03-16 00:57:16.562361 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:57:16.562365 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:57:16.562370 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:57:16.562374 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:57:16.562378 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:57:16.562382 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:57:16.562386 | orchestrator | 2026-03-16 00:57:16.562390 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-03-16 00:57:16.562394 | orchestrator | Monday 16 March 2026 00:55:09 +0000 (0:00:02.393) 0:08:55.810 ********** 2026-03-16 00:57:16.562398 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:57:16.562402 | 
orchestrator | ok: [testbed-node-4] 2026-03-16 00:57:16.562406 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:57:16.562411 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:16.562415 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:16.562419 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:16.562423 | orchestrator | 2026-03-16 00:57:16.562427 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-03-16 00:57:16.562431 | orchestrator | 2026-03-16 00:57:16.562435 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-16 00:57:16.562439 | orchestrator | Monday 16 March 2026 00:55:10 +0000 (0:00:01.220) 0:08:57.030 ********** 2026-03-16 00:57:16.562458 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-16 00:57:16.562463 | orchestrator | 2026-03-16 00:57:16.562468 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-16 00:57:16.562472 | orchestrator | Monday 16 March 2026 00:55:11 +0000 (0:00:00.546) 0:08:57.577 ********** 2026-03-16 00:57:16.562476 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-16 00:57:16.562480 | orchestrator | 2026-03-16 00:57:16.562484 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-16 00:57:16.562488 | orchestrator | Monday 16 March 2026 00:55:12 +0000 (0:00:00.850) 0:08:58.428 ********** 2026-03-16 00:57:16.562492 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.562496 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.562500 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.562504 | orchestrator | 2026-03-16 00:57:16.562508 | orchestrator | TASK [ceph-handler : Check for an osd 
container] ******************************* 2026-03-16 00:57:16.562512 | orchestrator | Monday 16 March 2026 00:55:12 +0000 (0:00:00.316) 0:08:58.744 ********** 2026-03-16 00:57:16.562516 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:57:16.562520 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:57:16.562524 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:57:16.562528 | orchestrator | 2026-03-16 00:57:16.562532 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-16 00:57:16.562537 | orchestrator | Monday 16 March 2026 00:55:13 +0000 (0:00:00.743) 0:08:59.487 ********** 2026-03-16 00:57:16.562541 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:57:16.562545 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:57:16.562549 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:57:16.562553 | orchestrator | 2026-03-16 00:57:16.562557 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-16 00:57:16.562561 | orchestrator | Monday 16 March 2026 00:55:14 +0000 (0:00:01.030) 0:09:00.518 ********** 2026-03-16 00:57:16.562565 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:57:16.562569 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:57:16.562573 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:57:16.562577 | orchestrator | 2026-03-16 00:57:16.562581 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-16 00:57:16.562585 | orchestrator | Monday 16 March 2026 00:55:14 +0000 (0:00:00.712) 0:09:01.231 ********** 2026-03-16 00:57:16.562589 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.562593 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.562601 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.562605 | orchestrator | 2026-03-16 00:57:16.562609 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-16 
00:57:16.562613 | orchestrator | Monday 16 March 2026 00:55:15 +0000 (0:00:00.331) 0:09:01.562 ********** 2026-03-16 00:57:16.562617 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.562622 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.562626 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.562630 | orchestrator | 2026-03-16 00:57:16.562634 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-16 00:57:16.562638 | orchestrator | Monday 16 March 2026 00:55:15 +0000 (0:00:00.371) 0:09:01.934 ********** 2026-03-16 00:57:16.562642 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.562646 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.562650 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.562654 | orchestrator | 2026-03-16 00:57:16.562658 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-16 00:57:16.562662 | orchestrator | Monday 16 March 2026 00:55:16 +0000 (0:00:00.650) 0:09:02.584 ********** 2026-03-16 00:57:16.562666 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:57:16.562670 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:57:16.562674 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:57:16.562678 | orchestrator | 2026-03-16 00:57:16.562682 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-16 00:57:16.562687 | orchestrator | Monday 16 March 2026 00:55:16 +0000 (0:00:00.674) 0:09:03.259 ********** 2026-03-16 00:57:16.562691 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:57:16.562695 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:57:16.562699 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:57:16.562703 | orchestrator | 2026-03-16 00:57:16.562707 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-16 00:57:16.562711 | orchestrator | Monday 
16 March 2026 00:55:17 +0000 (0:00:00.771) 0:09:04.030 ********** 2026-03-16 00:57:16.562715 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.562719 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.562723 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.562727 | orchestrator | 2026-03-16 00:57:16.562734 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-16 00:57:16.562739 | orchestrator | Monday 16 March 2026 00:55:17 +0000 (0:00:00.295) 0:09:04.326 ********** 2026-03-16 00:57:16.562743 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.562747 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.562751 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.562755 | orchestrator | 2026-03-16 00:57:16.562759 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-16 00:57:16.562763 | orchestrator | Monday 16 March 2026 00:55:18 +0000 (0:00:00.586) 0:09:04.912 ********** 2026-03-16 00:57:16.562767 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:57:16.562771 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:57:16.562775 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:57:16.562779 | orchestrator | 2026-03-16 00:57:16.562783 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-16 00:57:16.562788 | orchestrator | Monday 16 March 2026 00:55:18 +0000 (0:00:00.338) 0:09:05.250 ********** 2026-03-16 00:57:16.562792 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:57:16.562796 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:57:16.562800 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:57:16.562804 | orchestrator | 2026-03-16 00:57:16.562808 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-16 00:57:16.562812 | orchestrator | Monday 16 March 2026 00:55:19 +0000 
(0:00:00.343) 0:09:05.593 ********** 2026-03-16 00:57:16.562816 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:57:16.562820 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:57:16.562826 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:57:16.562834 | orchestrator | 2026-03-16 00:57:16.562838 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-16 00:57:16.562842 | orchestrator | Monday 16 March 2026 00:55:19 +0000 (0:00:00.334) 0:09:05.928 ********** 2026-03-16 00:57:16.562846 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.562850 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.562854 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.562858 | orchestrator | 2026-03-16 00:57:16.562862 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-16 00:57:16.562866 | orchestrator | Monday 16 March 2026 00:55:20 +0000 (0:00:00.575) 0:09:06.504 ********** 2026-03-16 00:57:16.562871 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.562875 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.562879 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.562883 | orchestrator | 2026-03-16 00:57:16.562887 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-16 00:57:16.562891 | orchestrator | Monday 16 March 2026 00:55:20 +0000 (0:00:00.342) 0:09:06.846 ********** 2026-03-16 00:57:16.562895 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.562899 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.562903 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.562907 | orchestrator | 2026-03-16 00:57:16.562911 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-16 00:57:16.562915 | orchestrator | Monday 16 March 2026 00:55:20 +0000 (0:00:00.276) 
0:09:07.122 ********** 2026-03-16 00:57:16.562919 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:57:16.562923 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:57:16.562927 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:57:16.562932 | orchestrator | 2026-03-16 00:57:16.562936 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-16 00:57:16.562940 | orchestrator | Monday 16 March 2026 00:55:21 +0000 (0:00:00.374) 0:09:07.497 ********** 2026-03-16 00:57:16.562944 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:57:16.562948 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:57:16.562952 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:57:16.562956 | orchestrator | 2026-03-16 00:57:16.562960 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-03-16 00:57:16.562964 | orchestrator | Monday 16 March 2026 00:55:21 +0000 (0:00:00.745) 0:09:08.243 ********** 2026-03-16 00:57:16.562968 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.562972 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.562976 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2026-03-16 00:57:16.562980 | orchestrator | 2026-03-16 00:57:16.562984 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2026-03-16 00:57:16.562989 | orchestrator | Monday 16 March 2026 00:55:22 +0000 (0:00:00.344) 0:09:08.588 ********** 2026-03-16 00:57:16.562993 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-16 00:57:16.562997 | orchestrator | 2026-03-16 00:57:16.563001 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-03-16 00:57:16.563005 | orchestrator | Monday 16 March 2026 00:55:24 +0000 (0:00:02.226) 0:09:10.814 ********** 2026-03-16 00:57:16.563011 | orchestrator | skipping: [testbed-node-3] 
=> (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-03-16 00:57:16.563017 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.563021 | orchestrator | 2026-03-16 00:57:16.563025 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-03-16 00:57:16.563029 | orchestrator | Monday 16 March 2026 00:55:24 +0000 (0:00:00.404) 0:09:11.219 ********** 2026-03-16 00:57:16.563035 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-16 00:57:16.563048 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-16 00:57:16.563053 | orchestrator | 2026-03-16 00:57:16.563060 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-03-16 00:57:16.563064 | orchestrator | Monday 16 March 2026 00:55:32 +0000 (0:00:07.871) 0:09:19.090 ********** 2026-03-16 00:57:16.563068 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-16 00:57:16.563072 | orchestrator | 2026-03-16 00:57:16.563077 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-03-16 00:57:16.563081 | orchestrator | Monday 16 March 2026 00:55:36 +0000 (0:00:03.547) 0:09:22.637 ********** 2026-03-16 00:57:16.563085 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-03-16 00:57:16.563089 | orchestrator | 2026-03-16 00:57:16.563093 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-03-16 00:57:16.563097 | orchestrator | Monday 16 March 2026 00:55:36 +0000 (0:00:00.514) 0:09:23.152 ********** 2026-03-16 00:57:16.563101 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-16 00:57:16.563105 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-16 00:57:16.563109 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-16 00:57:16.563113 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-03-16 00:57:16.563120 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-03-16 00:57:16.563124 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-03-16 00:57:16.563128 | orchestrator | 2026-03-16 00:57:16.563132 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-03-16 00:57:16.563136 | orchestrator | Monday 16 March 2026 00:55:37 +0000 (0:00:00.976) 0:09:24.129 ********** 2026-03-16 00:57:16.563140 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-16 00:57:16.563144 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-16 00:57:16.563149 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-16 00:57:16.563153 | orchestrator | 2026-03-16 00:57:16.563157 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-03-16 00:57:16.563161 | orchestrator | Monday 16 March 2026 00:55:40 +0000 (0:00:02.417) 0:09:26.546 ********** 2026-03-16 00:57:16.563165 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-16 00:57:16.563169 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2026-03-16 00:57:16.563173 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:57:16.563177 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-16 00:57:16.563181 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-16 00:57:16.563185 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:57:16.563189 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-16 00:57:16.563194 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-16 00:57:16.563198 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:57:16.563202 | orchestrator | 2026-03-16 00:57:16.563206 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-03-16 00:57:16.563210 | orchestrator | Monday 16 March 2026 00:55:41 +0000 (0:00:01.618) 0:09:28.165 ********** 2026-03-16 00:57:16.563214 | orchestrator | changed: [testbed-node-3] 2026-03-16 00:57:16.563218 | orchestrator | changed: [testbed-node-4] 2026-03-16 00:57:16.563222 | orchestrator | changed: [testbed-node-5] 2026-03-16 00:57:16.563226 | orchestrator | 2026-03-16 00:57:16.563230 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-03-16 00:57:16.563238 | orchestrator | Monday 16 March 2026 00:55:44 +0000 (0:00:02.435) 0:09:30.600 ********** 2026-03-16 00:57:16.563242 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:57:16.563246 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:57:16.563250 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:57:16.563254 | orchestrator | 2026-03-16 00:57:16.563258 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-03-16 00:57:16.563262 | orchestrator | Monday 16 March 2026 00:55:44 +0000 (0:00:00.392) 0:09:30.993 ********** 2026-03-16 00:57:16.563266 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5
2026-03-16 00:57:16.563270 | orchestrator |
2026-03-16 00:57:16.563274 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-03-16 00:57:16.563279 | orchestrator | Monday 16 March 2026 00:55:45 +0000 (0:00:00.904) 0:09:31.897 **********
2026-03-16 00:57:16.563283 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-16 00:57:16.563287 | orchestrator |
2026-03-16 00:57:16.563291 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-03-16 00:57:16.563295 | orchestrator | Monday 16 March 2026 00:55:46 +0000 (0:00:00.557) 0:09:32.454 **********
2026-03-16 00:57:16.563299 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:57:16.563303 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:57:16.563307 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:57:16.563311 | orchestrator |
2026-03-16 00:57:16.563315 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-03-16 00:57:16.563319 | orchestrator | Monday 16 March 2026 00:55:47 +0000 (0:00:01.217) 0:09:33.671 **********
2026-03-16 00:57:16.563324 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:57:16.563328 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:57:16.563332 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:57:16.563336 | orchestrator |
2026-03-16 00:57:16.563340 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-03-16 00:57:16.563344 | orchestrator | Monday 16 March 2026 00:55:49 +0000 (0:00:01.750) 0:09:35.422 **********
2026-03-16 00:57:16.563348 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:57:16.563352 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:57:16.563356 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:57:16.563360 | orchestrator |
2026-03-16 00:57:16.563364 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-03-16 00:57:16.563371 | orchestrator | Monday 16 March 2026 00:55:51 +0000 (0:00:01.976) 0:09:37.399 **********
2026-03-16 00:57:16.563375 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:57:16.563379 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:57:16.563383 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:57:16.563388 | orchestrator |
2026-03-16 00:57:16.563392 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-03-16 00:57:16.563396 | orchestrator | Monday 16 March 2026 00:55:53 +0000 (0:00:02.053) 0:09:39.452 **********
2026-03-16 00:57:16.563400 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.563404 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.563408 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.563412 | orchestrator |
2026-03-16 00:57:16.563416 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-16 00:57:16.563420 | orchestrator | Monday 16 March 2026 00:55:54 +0000 (0:00:01.743) 0:09:41.196 **********
2026-03-16 00:57:16.563424 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:57:16.563428 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:57:16.563432 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:57:16.563436 | orchestrator |
2026-03-16 00:57:16.563440 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-03-16 00:57:16.563461 | orchestrator | Monday 16 March 2026 00:55:55 +0000 (0:00:00.760) 0:09:41.957 **********
2026-03-16 00:57:16.563468 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-16 00:57:16.563476 | orchestrator |
2026-03-16 00:57:16.563480 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-03-16 00:57:16.563484 | orchestrator | Monday 16 March 2026 00:55:56 +0000 (0:00:00.935) 0:09:42.892 **********
2026-03-16 00:57:16.563488 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.563492 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.563496 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.563500 | orchestrator |
2026-03-16 00:57:16.563504 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-03-16 00:57:16.563509 | orchestrator | Monday 16 March 2026 00:55:56 +0000 (0:00:00.366) 0:09:43.259 **********
2026-03-16 00:57:16.563513 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:57:16.563517 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:57:16.563521 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:57:16.563525 | orchestrator |
2026-03-16 00:57:16.563529 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-03-16 00:57:16.563533 | orchestrator | Monday 16 March 2026 00:55:58 +0000 (0:00:01.278) 0:09:44.537 **********
2026-03-16 00:57:16.563537 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-16 00:57:16.563541 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-16 00:57:16.563545 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-16 00:57:16.563549 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.563553 | orchestrator |
2026-03-16 00:57:16.563557 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-03-16 00:57:16.563561 | orchestrator | Monday 16 March 2026 00:55:59 +0000 (0:00:01.124) 0:09:45.662 **********
2026-03-16 00:57:16.563565 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.563569 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.563573 | orchestrator | ok: [testbed-node-5]
2026-03-16
00:57:16.563577 | orchestrator |
2026-03-16 00:57:16.563581 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-03-16 00:57:16.563585 | orchestrator |
2026-03-16 00:57:16.563589 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-16 00:57:16.563594 | orchestrator | Monday 16 March 2026 00:56:00 +0000 (0:00:00.931) 0:09:46.593 **********
2026-03-16 00:57:16.563598 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-16 00:57:16.563602 | orchestrator |
2026-03-16 00:57:16.563606 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-16 00:57:16.563610 | orchestrator | Monday 16 March 2026 00:56:00 +0000 (0:00:00.555) 0:09:47.149 **********
2026-03-16 00:57:16.563614 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-16 00:57:16.563618 | orchestrator |
2026-03-16 00:57:16.563622 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-16 00:57:16.563626 | orchestrator | Monday 16 March 2026 00:56:01 +0000 (0:00:00.832) 0:09:47.982 **********
2026-03-16 00:57:16.563631 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.563635 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.563639 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.563643 | orchestrator |
2026-03-16 00:57:16.563647 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-16 00:57:16.563651 | orchestrator | Monday 16 March 2026 00:56:01 +0000 (0:00:00.379) 0:09:48.361 **********
2026-03-16 00:57:16.563655 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.563659 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.563663 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.563667 | orchestrator |
2026-03-16 00:57:16.563671 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-16 00:57:16.563675 | orchestrator | Monday 16 March 2026 00:56:02 +0000 (0:00:00.784) 0:09:49.145 **********
2026-03-16 00:57:16.563684 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.563688 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.563692 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.563696 | orchestrator |
2026-03-16 00:57:16.563700 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-16 00:57:16.563704 | orchestrator | Monday 16 March 2026 00:56:03 +0000 (0:00:01.042) 0:09:50.188 **********
2026-03-16 00:57:16.563708 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.563712 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.563716 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.563720 | orchestrator |
2026-03-16 00:57:16.563724 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-16 00:57:16.563728 | orchestrator | Monday 16 March 2026 00:56:04 +0000 (0:00:00.741) 0:09:50.929 **********
2026-03-16 00:57:16.563735 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.563740 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.563744 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.563748 | orchestrator |
2026-03-16 00:57:16.563752 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-16 00:57:16.563756 | orchestrator | Monday 16 March 2026 00:56:04 +0000 (0:00:00.350) 0:09:51.280 **********
2026-03-16 00:57:16.563760 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.563764 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.563769 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.563773 | orchestrator |
2026-03-16 00:57:16.563777 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-16 00:57:16.563781 | orchestrator | Monday 16 March 2026 00:56:05 +0000 (0:00:00.379) 0:09:51.659 **********
2026-03-16 00:57:16.563785 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.563789 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.563793 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.563797 | orchestrator |
2026-03-16 00:57:16.563802 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-16 00:57:16.563806 | orchestrator | Monday 16 March 2026 00:56:06 +0000 (0:00:00.747) 0:09:52.407 **********
2026-03-16 00:57:16.563810 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.563814 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.563818 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.563822 | orchestrator |
2026-03-16 00:57:16.563828 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-16 00:57:16.563833 | orchestrator | Monday 16 March 2026 00:56:06 +0000 (0:00:00.775) 0:09:53.183 **********
2026-03-16 00:57:16.563837 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.563841 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.563845 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.563849 | orchestrator |
2026-03-16 00:57:16.563853 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-16 00:57:16.563857 | orchestrator | Monday 16 March 2026 00:56:07 +0000 (0:00:00.806) 0:09:53.989 **********
2026-03-16 00:57:16.563861 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.563865 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.563870 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.563874 | orchestrator |
2026-03-16 00:57:16.563878 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-16 00:57:16.563882 | orchestrator | Monday 16 March 2026 00:56:07 +0000 (0:00:00.344) 0:09:54.333 **********
2026-03-16 00:57:16.563886 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.563890 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.563894 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.563898 | orchestrator |
2026-03-16 00:57:16.563902 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-16 00:57:16.563906 | orchestrator | Monday 16 March 2026 00:56:08 +0000 (0:00:00.654) 0:09:54.988 **********
2026-03-16 00:57:16.563911 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.563915 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.563922 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.563926 | orchestrator |
2026-03-16 00:57:16.563930 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-16 00:57:16.563934 | orchestrator | Monday 16 March 2026 00:56:08 +0000 (0:00:00.350) 0:09:55.339 **********
2026-03-16 00:57:16.563938 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.563942 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.563947 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.563951 | orchestrator |
2026-03-16 00:57:16.563955 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-16 00:57:16.563959 | orchestrator | Monday 16 March 2026 00:56:09 +0000 (0:00:00.372) 0:09:55.711 **********
2026-03-16 00:57:16.563963 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.563967 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.563971 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.563975 | orchestrator |
2026-03-16 00:57:16.563980 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-16 00:57:16.563984 | orchestrator | Monday 16 March 2026 00:56:09 +0000 (0:00:00.344) 0:09:56.056 **********
2026-03-16 00:57:16.563988 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.563992 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.563996 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.564000 | orchestrator |
2026-03-16 00:57:16.564004 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-16 00:57:16.564008 | orchestrator | Monday 16 March 2026 00:56:10 +0000 (0:00:00.659) 0:09:56.716 **********
2026-03-16 00:57:16.564012 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.564016 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.564020 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.564025 | orchestrator |
2026-03-16 00:57:16.564029 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-16 00:57:16.564033 | orchestrator | Monday 16 March 2026 00:56:10 +0000 (0:00:00.340) 0:09:57.057 **********
2026-03-16 00:57:16.564037 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.564041 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.564045 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.564049 | orchestrator |
2026-03-16 00:57:16.564053 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-16 00:57:16.564057 | orchestrator | Monday 16 March 2026 00:56:11 +0000 (0:00:00.325) 0:09:57.382 **********
2026-03-16 00:57:16.564062 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.564066 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.564070 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.564077 | orchestrator |
2026-03-16 00:57:16.564083 |
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-16 00:57:16.564093 | orchestrator | Monday 16 March 2026 00:56:11 +0000 (0:00:00.390) 0:09:57.772 **********
2026-03-16 00:57:16.564104 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.564110 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.564117 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.564123 | orchestrator |
2026-03-16 00:57:16.564130 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-03-16 00:57:16.564137 | orchestrator | Monday 16 March 2026 00:56:12 +0000 (0:00:00.894) 0:09:58.667 **********
2026-03-16 00:57:16.564149 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-16 00:57:16.564157 | orchestrator |
2026-03-16 00:57:16.564163 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-03-16 00:57:16.564170 | orchestrator | Monday 16 March 2026 00:56:12 +0000 (0:00:00.544) 0:09:59.211 **********
2026-03-16 00:57:16.564176 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-16 00:57:16.564183 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-16 00:57:16.564189 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-16 00:57:16.564201 | orchestrator |
2026-03-16 00:57:16.564208 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-03-16 00:57:16.564215 | orchestrator | Monday 16 March 2026 00:56:14 +0000 (0:00:02.130) 0:10:01.342 **********
2026-03-16 00:57:16.564221 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-16 00:57:16.564228 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-16 00:57:16.564234 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:57:16.564240 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-16 00:57:16.564247 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-03-16 00:57:16.564254 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:57:16.564261 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-16 00:57:16.564267 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-16 00:57:16.564277 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:57:16.564284 | orchestrator |
2026-03-16 00:57:16.564290 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2026-03-16 00:57:16.564297 | orchestrator | Monday 16 March 2026 00:56:16 +0000 (0:00:01.464) 0:10:02.806 **********
2026-03-16 00:57:16.564303 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.564310 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.564316 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.564324 | orchestrator |
2026-03-16 00:57:16.564330 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2026-03-16 00:57:16.564338 | orchestrator | Monday 16 March 2026 00:56:16 +0000 (0:00:00.346) 0:10:03.153 **********
2026-03-16 00:57:16.564345 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-16 00:57:16.564351 | orchestrator |
2026-03-16 00:57:16.564359 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2026-03-16 00:57:16.564366 | orchestrator | Monday 16 March 2026 00:56:17 +0000 (0:00:00.545) 0:10:03.699 **********
2026-03-16 00:57:16.564374 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-16 00:57:16.564382 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-16 00:57:16.564388 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-16 00:57:16.564395 | orchestrator |
2026-03-16 00:57:16.564403 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2026-03-16 00:57:16.564410 | orchestrator | Monday 16 March 2026 00:56:18 +0000 (0:00:01.359) 0:10:05.059 **********
2026-03-16 00:57:16.564417 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-16 00:57:16.564424 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-03-16 00:57:16.564428 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-16 00:57:16.564433 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-03-16 00:57:16.564437 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-16 00:57:16.564441 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-03-16 00:57:16.564484 | orchestrator |
2026-03-16 00:57:16.564489 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-03-16 00:57:16.564493 | orchestrator | Monday 16 March 2026 00:56:23 +0000 (0:00:04.560) 0:10:09.619 **********
2026-03-16 00:57:16.564497 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-16 00:57:16.564507 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-16 00:57:16.564511 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-16 00:57:16.564515 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-16 00:57:16.564519 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-16 00:57:16.564523 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-16 00:57:16.564527 | orchestrator |
2026-03-16 00:57:16.564532 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-03-16 00:57:16.564536 | orchestrator | Monday 16 March 2026 00:56:25 +0000 (0:00:02.513) 0:10:12.132 **********
2026-03-16 00:57:16.564540 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-16 00:57:16.564544 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:57:16.564548 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-16 00:57:16.564552 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:57:16.564557 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-16 00:57:16.564561 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:57:16.564565 | orchestrator |
2026-03-16 00:57:16.564575 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2026-03-16 00:57:16.564580 | orchestrator | Monday 16 March 2026 00:56:27 +0000 (0:00:01.497) 0:10:13.629 **********
2026-03-16 00:57:16.564584 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
2026-03-16 00:57:16.564588 | orchestrator |
2026-03-16 00:57:16.564592 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2026-03-16 00:57:16.564596 | orchestrator | Monday 16 March 2026 00:56:27 +0000 (0:00:00.211) 0:10:13.841 **********
2026-03-16 00:57:16.564600 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-16 00:57:16.564605 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-16 00:57:16.564609 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-16 00:57:16.564617 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-16 00:57:16.564621 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-16 00:57:16.564626 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.564631 | orchestrator |
2026-03-16 00:57:16.564638 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-03-16 00:57:16.564648 | orchestrator | Monday 16 March 2026 00:56:28 +0000 (0:00:00.970) 0:10:14.811 **********
2026-03-16 00:57:16.564657 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-16 00:57:16.564663 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-16 00:57:16.564671 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-16 00:57:16.564679 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-16 00:57:16.564686 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-16 00:57:16.564694 | orchestrator | skipping: [testbed-node-3]
2026-03-16
00:57:16.564701 | orchestrator |
2026-03-16 00:57:16.564709 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-03-16 00:57:16.564713 | orchestrator | Monday 16 March 2026 00:56:29 +0000 (0:00:00.567) 0:10:15.378 **********
2026-03-16 00:57:16.564723 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-16 00:57:16.564727 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-16 00:57:16.564731 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-16 00:57:16.564735 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-16 00:57:16.564739 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-16 00:57:16.564743 | orchestrator |
2026-03-16 00:57:16.564747 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2026-03-16 00:57:16.564752 | orchestrator | Monday 16 March 2026 00:57:00 +0000 (0:00:31.843) 0:10:47.222 **********
2026-03-16 00:57:16.564756 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.564760 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.564764 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.564768 | orchestrator |
2026-03-16 00:57:16.564772 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2026-03-16 00:57:16.564776 | orchestrator | Monday 16 March 2026 00:57:01 +0000 (0:00:00.355) 0:10:47.577 **********
2026-03-16 00:57:16.564780 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.564784 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.564788 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.564792 | orchestrator |
2026-03-16 00:57:16.564796 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-03-16 00:57:16.564800 | orchestrator | Monday 16 March 2026 00:57:01 +0000 (0:00:00.308) 0:10:47.885 **********
2026-03-16 00:57:16.564804 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-16 00:57:16.564808 | orchestrator |
2026-03-16 00:57:16.564812 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-03-16 00:57:16.564816 | orchestrator | Monday 16 March 2026 00:57:02 +0000 (0:00:00.815) 0:10:48.700 **********
2026-03-16 00:57:16.564825 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-16 00:57:16.564829 | orchestrator |
2026-03-16 00:57:16.564833 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-03-16 00:57:16.564837 | orchestrator | Monday 16 March 2026 00:57:02 +0000 (0:00:00.528) 0:10:49.229 **********
2026-03-16 00:57:16.564841 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:57:16.564845 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:57:16.564849 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:57:16.564854 | orchestrator |
2026-03-16 00:57:16.564858 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-03-16 00:57:16.564862 | orchestrator | Monday 16 March 2026 00:57:04 +0000 (0:00:01.288) 0:10:50.518 **********
2026-03-16 00:57:16.564866 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:57:16.564870 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:57:16.564874 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:57:16.564878 | orchestrator |
2026-03-16 00:57:16.564882 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-03-16 00:57:16.564886 | orchestrator | Monday 16 March 2026 00:57:05 +0000 (0:00:01.581) 0:10:52.099 **********
2026-03-16 00:57:16.564890 | orchestrator | changed: [testbed-node-3]
2026-03-16 00:57:16.564895 | orchestrator | changed: [testbed-node-4]
2026-03-16 00:57:16.564898 | orchestrator | changed: [testbed-node-5]
2026-03-16 00:57:16.564902 | orchestrator |
2026-03-16 00:57:16.564916 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-03-16 00:57:16.564920 | orchestrator | Monday 16 March 2026 00:57:07 +0000 (0:00:01.875) 0:10:53.975 **********
2026-03-16 00:57:16.564924 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-16 00:57:16.564928 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-16 00:57:16.564932 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-16 00:57:16.564937 | orchestrator |
2026-03-16 00:57:16.564941 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-16 00:57:16.564945 | orchestrator | Monday 16 March 2026 00:57:10 +0000 (0:00:02.764) 0:10:56.739 **********
2026-03-16 00:57:16.564949 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.564953 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.564958 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.564967 | orchestrator |
2026-03-16 00:57:16.564976 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-03-16 00:57:16.564983 | orchestrator | Monday 16 March 2026 00:57:10 +0000 (0:00:00.365) 0:10:57.104 **********
2026-03-16 00:57:16.564989 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-16 00:57:16.564996 | orchestrator |
2026-03-16 00:57:16.565002 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-03-16 00:57:16.565009 | orchestrator | Monday 16 March 2026 00:57:11 +0000 (0:00:00.523) 0:10:57.628 **********
2026-03-16 00:57:16.565015 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.565021 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.565028 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.565032 | orchestrator |
2026-03-16 00:57:16.565036 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-03-16 00:57:16.565040 | orchestrator | Monday 16 March 2026 00:57:11 +0000 (0:00:00.685) 0:10:58.314 **********
2026-03-16 00:57:16.565043 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:57:16.565047 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:57:16.565051 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:57:16.565054 | orchestrator |
2026-03-16 00:57:16.565058 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-03-16 00:57:16.565062 | orchestrator | Monday 16 March 2026 00:57:12 +0000 (0:00:00.383) 0:10:58.697 **********
2026-03-16 00:57:16.565066 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-16 00:57:16.565069 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-16 00:57:16.565073 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-16 00:57:16.565077 | orchestrator
| skipping: [testbed-node-3]
2026-03-16 00:57:16.565081 | orchestrator |
2026-03-16 00:57:16.565084 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-03-16 00:57:16.565088 | orchestrator | Monday 16 March 2026 00:57:12 +0000 (0:00:00.603) 0:10:59.301 **********
2026-03-16 00:57:16.565092 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:57:16.565096 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:57:16.565099 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:57:16.565103 | orchestrator |
2026-03-16 00:57:16.565107 | orchestrator | PLAY RECAP *********************************************************************
2026-03-16 00:57:16.565110 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0
2026-03-16 00:57:16.565115 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2026-03-16 00:57:16.565119 | orchestrator | testbed-node-2 : ok=134  changed=34  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2026-03-16 00:57:16.565127 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0
2026-03-16 00:57:16.565136 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2026-03-16 00:57:16.565140 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2026-03-16 00:57:16.565143 | orchestrator |
2026-03-16 00:57:16.565147 | orchestrator |
2026-03-16 00:57:16.565151 | orchestrator |
2026-03-16 00:57:16.565154 | orchestrator | TASKS RECAP ********************************************************************
2026-03-16 00:57:16.565158 | orchestrator | Monday 16 March 2026 00:57:13 +0000 (0:00:00.313) 0:10:59.614 **********
2026-03-16 00:57:16.565162 | orchestrator | ===============================================================================
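The slowest step of the rgw play above is "ceph-rgw : Create rgw pools", which loops over the five `default.rgw.*` pool items shown in the log (each with `pg_num: 8`, `size: 3`, `type: replicated`) and creates them on the first mon host. As a rough, purely illustrative hand-run equivalent (the exact commands ceph-ansible issues may differ; this sketch only prints the CLI calls instead of executing them against a cluster):

```shell
# Pool names and settings taken from the task items in the log above.
# Printing instead of executing: running these for real requires admin
# access to the Ceph cluster on a mon node.
pools="default.rgw.buckets.data default.rgw.buckets.index default.rgw.control default.rgw.log default.rgw.meta"
for pool in $pools; do
  printf 'ceph osd pool create %s 8 8 replicated\n' "$pool"
  printf 'ceph osd pool set %s size 3\n' "$pool"
done
```

The ~32 s runtime is unsurprising: each pool creation waits for the monitors to commit a new OSD map before the next command returns.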
2026-03-16 00:57:16.565166 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 43.50s
2026-03-16 00:57:16.565169 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 40.04s
2026-03-16 00:57:16.565173 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.42s
2026-03-16 00:57:16.565177 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.84s
2026-03-16 00:57:16.565181 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.83s
2026-03-16 00:57:16.565184 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.43s
2026-03-16 00:57:16.565191 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.83s
2026-03-16 00:57:16.565195 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.61s
2026-03-16 00:57:16.565198 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 7.87s
2026-03-16 00:57:16.565202 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.22s
2026-03-16 00:57:16.565206 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.64s
2026-03-16 00:57:16.565210 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 5.64s
2026-03-16 00:57:16.565213 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.59s
2026-03-16 00:57:16.565217 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.56s
2026-03-16 00:57:16.565221 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 4.36s
2026-03-16 00:57:16.565225 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 4.21s
2026-03-16 00:57:16.565228 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.09s
2026-03-16 00:57:16.565232 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.88s
2026-03-16 00:57:16.565236 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.79s
2026-03-16 00:57:16.565240 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.60s
2026-03-16 00:57:16.565243 | orchestrator | 2026-03-16 00:57:16 | INFO  | Task 9e0e5ef7-0dd0-410b-8b6c-a3414979e90c is in state STARTED
2026-03-16 00:57:16.565248 | orchestrator | 2026-03-16 00:57:16 | INFO  | Task 6550caf0-7d1a-48ab-bad2-baccec0559ae is in state STARTED
2026-03-16 00:57:16.565252 | orchestrator | 2026-03-16 00:57:16 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:57:19.601150 | orchestrator | 2026-03-16 00:57:19 | INFO  | Task da285a89-1082-48c5-8040-28776fd685e2 is in state STARTED
2026-03-16 00:57:19.601619 | orchestrator | 2026-03-16 00:57:19 | INFO  | Task 9e0e5ef7-0dd0-410b-8b6c-a3414979e90c is in state STARTED
2026-03-16 00:57:19.603413 | orchestrator | 2026-03-16 00:57:19 | INFO  | Task 6550caf0-7d1a-48ab-bad2-baccec0559ae is in state STARTED
2026-03-16 00:57:19.603497 | orchestrator | 2026-03-16 00:57:19 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:57:22.644830 | orchestrator | 2026-03-16 00:57:22 | INFO  | Task da285a89-1082-48c5-8040-28776fd685e2 is in state STARTED
2026-03-16 00:57:22.646325 | orchestrator | 2026-03-16 00:57:22 | INFO  | Task 9e0e5ef7-0dd0-410b-8b6c-a3414979e90c is in state STARTED
2026-03-16 00:57:22.648642 | orchestrator | 2026-03-16 00:57:22 | INFO  | Task 6550caf0-7d1a-48ab-bad2-baccec0559ae is in state STARTED
2026-03-16 00:57:22.648959 | orchestrator | 2026-03-16 00:57:22 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:57:25.699808 | orchestrator | 2026-03-16 00:57:25 | INFO  | Task da285a89-1082-48c5-8040-28776fd685e2 is in state STARTED
2026-03-16 00:57:25.701681 | orchestrator | 2026-03-16 00:57:25 | INFO  | Task 9e0e5ef7-0dd0-410b-8b6c-a3414979e90c is in state STARTED
2026-03-16 00:57:25.705009 | orchestrator | 2026-03-16 00:57:25 | INFO  | Task 6550caf0-7d1a-48ab-bad2-baccec0559ae is in state STARTED
2026-03-16 00:57:25.705065 | orchestrator | 2026-03-16 00:57:25 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:57:28.757754 | orchestrator | 2026-03-16 00:57:28 | INFO  | Task da285a89-1082-48c5-8040-28776fd685e2 is in state STARTED
2026-03-16 00:57:28.759657 | orchestrator | 2026-03-16 00:57:28 | INFO  | Task 9e0e5ef7-0dd0-410b-8b6c-a3414979e90c is in state STARTED
2026-03-16 00:57:28.761926 | orchestrator | 2026-03-16 00:57:28 | INFO  | Task 6550caf0-7d1a-48ab-bad2-baccec0559ae is in state STARTED
2026-03-16 00:57:28.761975 | orchestrator | 2026-03-16 00:57:28 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:57:31.819046 | orchestrator | 2026-03-16 00:57:31 | INFO  | Task da285a89-1082-48c5-8040-28776fd685e2 is in state STARTED
2026-03-16 00:57:31.820624 | orchestrator | 2026-03-16 00:57:31 | INFO  | Task 9e0e5ef7-0dd0-410b-8b6c-a3414979e90c is in state STARTED
2026-03-16 00:57:31.823286 | orchestrator | 2026-03-16 00:57:31 | INFO  | Task 6550caf0-7d1a-48ab-bad2-baccec0559ae is in state STARTED
2026-03-16 00:57:31.823329 | orchestrator | 2026-03-16 00:57:31 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:57:34.878308 | orchestrator | 2026-03-16 00:57:34 | INFO  | Task da285a89-1082-48c5-8040-28776fd685e2 is in state STARTED
2026-03-16 00:57:34.880661 | orchestrator | 2026-03-16 00:57:34 | INFO  | Task 9e0e5ef7-0dd0-410b-8b6c-a3414979e90c is in state STARTED
2026-03-16 00:57:34.881613 | orchestrator | 2026-03-16 00:57:34 | INFO  | Task 6550caf0-7d1a-48ab-bad2-baccec0559ae is in state STARTED
2026-03-16 00:57:34.882533 | orchestrator | 2026-03-16 00:57:34 | INFO  | Wait 1 second(s) until the next
check 2026-03-16 00:57:37.938283 | orchestrator | 2026-03-16 00:57:37 | INFO  | Task da285a89-1082-48c5-8040-28776fd685e2 is in state STARTED 2026-03-16 00:57:37.941471 | orchestrator | 2026-03-16 00:57:37 | INFO  | Task 9e0e5ef7-0dd0-410b-8b6c-a3414979e90c is in state STARTED 2026-03-16 00:57:37.942317 | orchestrator | 2026-03-16 00:57:37 | INFO  | Task 6550caf0-7d1a-48ab-bad2-baccec0559ae is in state STARTED 2026-03-16 00:57:37.942519 | orchestrator | 2026-03-16 00:57:37 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:57:40.976744 | orchestrator | 2026-03-16 00:57:40 | INFO  | Task da285a89-1082-48c5-8040-28776fd685e2 is in state STARTED 2026-03-16 00:57:40.978855 | orchestrator | 2026-03-16 00:57:40 | INFO  | Task 9e0e5ef7-0dd0-410b-8b6c-a3414979e90c is in state STARTED 2026-03-16 00:57:40.980306 | orchestrator | 2026-03-16 00:57:40 | INFO  | Task 6550caf0-7d1a-48ab-bad2-baccec0559ae is in state STARTED 2026-03-16 00:57:40.980351 | orchestrator | 2026-03-16 00:57:40 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:57:44.028220 | orchestrator | 2026-03-16 00:57:44 | INFO  | Task da285a89-1082-48c5-8040-28776fd685e2 is in state STARTED 2026-03-16 00:57:44.031106 | orchestrator | 2026-03-16 00:57:44 | INFO  | Task 9e0e5ef7-0dd0-410b-8b6c-a3414979e90c is in state STARTED 2026-03-16 00:57:44.032582 | orchestrator | 2026-03-16 00:57:44 | INFO  | Task 6550caf0-7d1a-48ab-bad2-baccec0559ae is in state STARTED 2026-03-16 00:57:44.033425 | orchestrator | 2026-03-16 00:57:44 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:57:47.065492 | orchestrator | 2026-03-16 00:57:47 | INFO  | Task da285a89-1082-48c5-8040-28776fd685e2 is in state STARTED 2026-03-16 00:57:47.135599 | orchestrator | 2026-03-16 00:57:47 | INFO  | Task 9e0e5ef7-0dd0-410b-8b6c-a3414979e90c is in state STARTED 2026-03-16 00:57:47.135676 | orchestrator | 2026-03-16 00:57:47 | INFO  | Task 6550caf0-7d1a-48ab-bad2-baccec0559ae is in state STARTED 2026-03-16 
00:57:47.135688 | orchestrator | 2026-03-16 00:57:47 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:57:50.114947 | orchestrator | 2026-03-16 00:57:50 | INFO  | Task da285a89-1082-48c5-8040-28776fd685e2 is in state SUCCESS 2026-03-16 00:57:50.115991 | orchestrator | 2026-03-16 00:57:50.116034 | orchestrator | 2026-03-16 00:57:50.116044 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-16 00:57:50.116052 | orchestrator | 2026-03-16 00:57:50.116059 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-16 00:57:50.116066 | orchestrator | Monday 16 March 2026 00:55:02 +0000 (0:00:00.261) 0:00:00.261 ********** 2026-03-16 00:57:50.116073 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:50.116081 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:57:50.116088 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:57:50.116094 | orchestrator | 2026-03-16 00:57:50.116101 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-16 00:57:50.116105 | orchestrator | Monday 16 March 2026 00:55:02 +0000 (0:00:00.335) 0:00:00.597 ********** 2026-03-16 00:57:50.116110 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-03-16 00:57:50.116115 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-03-16 00:57:50.116119 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-03-16 00:57:50.116123 | orchestrator | 2026-03-16 00:57:50.116127 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-03-16 00:57:50.116131 | orchestrator | 2026-03-16 00:57:50.116134 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-16 00:57:50.116138 | orchestrator | Monday 16 March 2026 00:55:03 +0000 (0:00:00.433) 0:00:01.030 ********** 2026-03-16 00:57:50.116142 | 
orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:57:50.116146 | orchestrator | 2026-03-16 00:57:50.116150 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-03-16 00:57:50.116154 | orchestrator | Monday 16 March 2026 00:55:03 +0000 (0:00:00.503) 0:00:01.534 ********** 2026-03-16 00:57:50.116158 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-16 00:57:50.116161 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-16 00:57:50.116165 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-16 00:57:50.116169 | orchestrator | 2026-03-16 00:57:50.116173 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-03-16 00:57:50.116179 | orchestrator | Monday 16 March 2026 00:55:05 +0000 (0:00:01.715) 0:00:03.250 ********** 2026-03-16 00:57:50.116206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-16 00:57:50.116242 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-16 00:57:50.116262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-16 00:57:50.116272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 
'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-16 00:57:50.116284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-16 00:57:50.116298 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-16 00:57:50.116304 | orchestrator | 2026-03-16 00:57:50.116311 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-16 00:57:50.116317 | orchestrator | Monday 16 March 2026 00:55:07 +0000 (0:00:02.245) 0:00:05.495 ********** 2026-03-16 00:57:50.116324 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:57:50.116330 | orchestrator | 2026-03-16 00:57:50.116336 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-03-16 00:57:50.116342 | orchestrator | Monday 16 March 2026 00:55:08 +0000 (0:00:00.592) 0:00:06.087 ********** 2026-03-16 00:57:50.116353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-16 00:57:50.116358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-16 00:57:50.116365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-16 00:57:50.116409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-16 00:57:50.116420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-16 00:57:50.116425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-16 00:57:50.116433 | orchestrator | 2026-03-16 00:57:50.116437 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS 
certificate] *** 2026-03-16 00:57:50.116441 | orchestrator | Monday 16 March 2026 00:55:11 +0000 (0:00:03.063) 0:00:09.151 ********** 2026-03-16 00:57:50.116449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-16 00:57:50.116453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-16 00:57:50.116458 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:50.116467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-16 00:57:50.116471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-16 00:57:50.116478 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:50.116485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-16 00:57:50.116489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-16 00:57:50.116493 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:50.116497 | orchestrator | 2026-03-16 00:57:50.116501 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-16 00:57:50.116504 | orchestrator | Monday 16 March 2026 00:55:12 +0000 (0:00:01.412) 0:00:10.564 ********** 2026-03-16 00:57:50.116512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-16 00:57:50.116516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-16 00:57:50.116523 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:50.116530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-16 00:57:50.116534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-16 00:57:50.116538 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:50.116546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-16 00:57:50.116552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-16 00:57:50.116562 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:50.116568 | orchestrator | 2026-03-16 00:57:50.116574 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-03-16 00:57:50.116580 | orchestrator | Monday 16 March 2026 00:55:13 +0000 (0:00:00.820) 0:00:11.384 ********** 2026-03-16 00:57:50.116590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-16 00:57:50.116597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-16 00:57:50.116604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-16 00:57:50.116615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-16 00:57:50.116631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}}}}) 2026-03-16 00:57:50.116643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-16 00:57:50.116650 | orchestrator | 2026-03-16 00:57:50.116657 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-03-16 00:57:50.116662 | orchestrator | Monday 16 March 2026 00:55:15 +0000 (0:00:02.629) 0:00:14.014 ********** 2026-03-16 00:57:50.116666 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:57:50.116670 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:57:50.116675 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:57:50.116679 | orchestrator | 2026-03-16 00:57:50.116683 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-03-16 00:57:50.116688 | orchestrator | Monday 16 March 2026 00:55:18 +0000 (0:00:02.920) 0:00:16.935 ********** 2026-03-16 00:57:50.116692 | orchestrator | changed: 
[testbed-node-0] 2026-03-16 00:57:50.116696 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:57:50.116700 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:57:50.116705 | orchestrator | 2026-03-16 00:57:50.116709 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-03-16 00:57:50.116713 | orchestrator | Monday 16 March 2026 00:55:21 +0000 (0:00:02.473) 0:00:19.408 ********** 2026-03-16 00:57:50.116724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-16 00:57:50.116733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-16 00:57:50.116741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-16 00:57:50.116746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-16 00:57:50.116754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-16 00:57:50.116767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-16 00:57:50.116771 | orchestrator | 2026-03-16 00:57:50.116775 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-16 00:57:50.116778 | orchestrator | Monday 16 March 2026 00:55:23 +0000 (0:00:02.194) 0:00:21.602 ********** 2026-03-16 00:57:50.116782 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:50.116786 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:57:50.116790 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:57:50.116793 | orchestrator | 2026-03-16 00:57:50.116797 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-16 00:57:50.116801 | orchestrator | Monday 16 March 2026 00:55:23 +0000 (0:00:00.261) 0:00:21.864 ********** 2026-03-16 00:57:50.116805 | orchestrator | 2026-03-16 00:57:50.116808 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-16 00:57:50.116812 | orchestrator | Monday 16 March 2026 00:55:23 +0000 (0:00:00.058) 0:00:21.923 ********** 2026-03-16 00:57:50.116816 | orchestrator | 2026-03-16 00:57:50.116819 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-16 00:57:50.116823 | orchestrator | Monday 16 March 2026 00:55:23 +0000 (0:00:00.058) 0:00:21.982 ********** 2026-03-16 00:57:50.116827 | orchestrator | 2026-03-16 00:57:50.116834 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-03-16 00:57:50.116838 
| orchestrator | Monday 16 March 2026 00:55:24 +0000 (0:00:00.064) 0:00:22.046 ********** 2026-03-16 00:57:50.116841 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:50.116845 | orchestrator | 2026-03-16 00:57:50.116849 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-03-16 00:57:50.116852 | orchestrator | Monday 16 March 2026 00:55:24 +0000 (0:00:00.597) 0:00:22.644 ********** 2026-03-16 00:57:50.116856 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:57:50.116860 | orchestrator | 2026-03-16 00:57:50.116864 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-03-16 00:57:50.116868 | orchestrator | Monday 16 March 2026 00:55:25 +0000 (0:00:00.437) 0:00:23.081 ********** 2026-03-16 00:57:50.116872 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:57:50.116875 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:57:50.116879 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:57:50.116883 | orchestrator | 2026-03-16 00:57:50.116886 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-03-16 00:57:50.116890 | orchestrator | Monday 16 March 2026 00:56:24 +0000 (0:00:59.686) 0:01:22.767 ********** 2026-03-16 00:57:50.116898 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:57:50.116902 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:57:50.116905 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:57:50.116909 | orchestrator | 2026-03-16 00:57:50.116913 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-16 00:57:50.116917 | orchestrator | Monday 16 March 2026 00:57:36 +0000 (0:01:11.452) 0:02:34.220 ********** 2026-03-16 00:57:50.116921 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:57:50.116924 | orchestrator | 
2026-03-16 00:57:50.116928 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-03-16 00:57:50.116932 | orchestrator | Monday 16 March 2026 00:57:36 +0000 (0:00:00.687) 0:02:34.907 ********** 2026-03-16 00:57:50.116936 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:50.116940 | orchestrator | 2026-03-16 00:57:50.116944 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-03-16 00:57:50.116947 | orchestrator | Monday 16 March 2026 00:57:39 +0000 (0:00:02.531) 0:02:37.438 ********** 2026-03-16 00:57:50.116951 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:57:50.116955 | orchestrator | 2026-03-16 00:57:50.116959 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-03-16 00:57:50.116962 | orchestrator | Monday 16 March 2026 00:57:41 +0000 (0:00:02.433) 0:02:39.872 ********** 2026-03-16 00:57:50.116967 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:57:50.116973 | orchestrator | 2026-03-16 00:57:50.116979 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-03-16 00:57:50.116985 | orchestrator | Monday 16 March 2026 00:57:44 +0000 (0:00:02.999) 0:02:42.872 ********** 2026-03-16 00:57:50.116990 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:57:50.116996 | orchestrator | 2026-03-16 00:57:50.117006 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-16 00:57:50.117015 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-16 00:57:50.117022 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-16 00:57:50.117028 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-16 00:57:50.117034 | orchestrator | 2026-03-16 
00:57:50.117039 | orchestrator | 2026-03-16 00:57:50.117043 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-16 00:57:50.117047 | orchestrator | Monday 16 March 2026 00:57:47 +0000 (0:00:03.008) 0:02:45.881 ********** 2026-03-16 00:57:50.117051 | orchestrator | =============================================================================== 2026-03-16 00:57:50.117055 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 71.45s 2026-03-16 00:57:50.117058 | orchestrator | opensearch : Restart opensearch container ------------------------------ 59.69s 2026-03-16 00:57:50.117062 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.06s 2026-03-16 00:57:50.117066 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 3.01s 2026-03-16 00:57:50.117070 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.00s 2026-03-16 00:57:50.117074 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.92s 2026-03-16 00:57:50.117077 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.63s 2026-03-16 00:57:50.117081 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.53s 2026-03-16 00:57:50.117085 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.47s 2026-03-16 00:57:50.117088 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.43s 2026-03-16 00:57:50.117092 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.25s 2026-03-16 00:57:50.117100 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.19s 2026-03-16 00:57:50.117104 | orchestrator | opensearch : Setting sysctl values 
-------------------------------------- 1.72s 2026-03-16 00:57:50.117108 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.41s 2026-03-16 00:57:50.117111 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.82s 2026-03-16 00:57:50.117115 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.69s 2026-03-16 00:57:50.117122 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 0.60s 2026-03-16 00:57:50.117126 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.59s 2026-03-16 00:57:50.117130 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.50s 2026-03-16 00:57:50.117133 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.44s 2026-03-16 00:57:50.117137 | orchestrator | 2026-03-16 00:57:50 | INFO  | Task 9e0e5ef7-0dd0-410b-8b6c-a3414979e90c is in state STARTED 2026-03-16 00:57:50.118446 | orchestrator | 2026-03-16 00:57:50 | INFO  | Task 6550caf0-7d1a-48ab-bad2-baccec0559ae is in state STARTED 2026-03-16 00:57:50.118489 | orchestrator | 2026-03-16 00:57:50 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:57:53.162203 | orchestrator | 2026-03-16 00:57:53 | INFO  | Task 9e0e5ef7-0dd0-410b-8b6c-a3414979e90c is in state STARTED 2026-03-16 00:57:53.165106 | orchestrator | 2026-03-16 00:57:53 | INFO  | Task 6550caf0-7d1a-48ab-bad2-baccec0559ae is in state STARTED 2026-03-16 00:57:53.165219 | orchestrator | 2026-03-16 00:57:53 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:57:56.213498 | orchestrator | 2026-03-16 00:57:56 | INFO  | Task 9e0e5ef7-0dd0-410b-8b6c-a3414979e90c is in state STARTED 2026-03-16 00:57:56.214786 | orchestrator | 2026-03-16 00:57:56 | INFO  | Task 6550caf0-7d1a-48ab-bad2-baccec0559ae is in state STARTED 2026-03-16 00:57:56.214826 | 
orchestrator | 2026-03-16 00:57:56 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:57:59.260752 | orchestrator | 2026-03-16 00:57:59 | INFO  | Task 9e0e5ef7-0dd0-410b-8b6c-a3414979e90c is in state STARTED 2026-03-16 00:57:59.262327 | orchestrator | 2026-03-16 00:57:59 | INFO  | Task 6550caf0-7d1a-48ab-bad2-baccec0559ae is in state STARTED 2026-03-16 00:57:59.262431 | orchestrator | 2026-03-16 00:57:59 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:58:02.307221 | orchestrator | 2026-03-16 00:58:02 | INFO  | Task 9e0e5ef7-0dd0-410b-8b6c-a3414979e90c is in state STARTED 2026-03-16 00:58:02.307309 | orchestrator | 2026-03-16 00:58:02 | INFO  | Task 6550caf0-7d1a-48ab-bad2-baccec0559ae is in state STARTED 2026-03-16 00:58:02.307316 | orchestrator | 2026-03-16 00:58:02 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:58:05.360255 | orchestrator | 2026-03-16 00:58:05 | INFO  | Task 9e0e5ef7-0dd0-410b-8b6c-a3414979e90c is in state SUCCESS 2026-03-16 00:58:05.361063 | orchestrator | 2026-03-16 00:58:05.361095 | orchestrator | 2026-03-16 00:58:05.361103 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-03-16 00:58:05.361111 | orchestrator | 2026-03-16 00:58:05.361117 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-03-16 00:58:05.361125 | orchestrator | Monday 16 March 2026 00:55:02 +0000 (0:00:00.100) 0:00:00.100 ********** 2026-03-16 00:58:05.361132 | orchestrator | ok: [localhost] => { 2026-03-16 00:58:05.361140 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 
2026-03-16 00:58:05.361146 | orchestrator | } 2026-03-16 00:58:05.361153 | orchestrator | 2026-03-16 00:58:05.361160 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-03-16 00:58:05.361189 | orchestrator | Monday 16 March 2026 00:55:02 +0000 (0:00:00.064) 0:00:00.165 ********** 2026-03-16 00:58:05.361197 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-03-16 00:58:05.361205 | orchestrator | ...ignoring 2026-03-16 00:58:05.361211 | orchestrator | 2026-03-16 00:58:05.361217 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-03-16 00:58:05.361223 | orchestrator | Monday 16 March 2026 00:55:05 +0000 (0:00:02.875) 0:00:03.041 ********** 2026-03-16 00:58:05.361229 | orchestrator | skipping: [localhost] 2026-03-16 00:58:05.361235 | orchestrator | 2026-03-16 00:58:05.361275 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-03-16 00:58:05.361281 | orchestrator | Monday 16 March 2026 00:55:05 +0000 (0:00:00.072) 0:00:03.114 ********** 2026-03-16 00:58:05.361288 | orchestrator | ok: [localhost] 2026-03-16 00:58:05.361311 | orchestrator | 2026-03-16 00:58:05.361318 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-16 00:58:05.361325 | orchestrator | 2026-03-16 00:58:05.361331 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-16 00:58:05.361339 | orchestrator | Monday 16 March 2026 00:55:05 +0000 (0:00:00.149) 0:00:03.264 ********** 2026-03-16 00:58:05.361361 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:58:05.361412 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:58:05.361419 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:58:05.361425 | orchestrator | 2026-03-16 00:58:05.361431 | 
orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-16 00:58:05.361437 | orchestrator | Monday 16 March 2026 00:55:05 +0000 (0:00:00.383) 0:00:03.648 ********** 2026-03-16 00:58:05.361444 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-03-16 00:58:05.361451 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-03-16 00:58:05.361457 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-16 00:58:05.361464 | orchestrator | 2026-03-16 00:58:05.361470 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-16 00:58:05.361477 | orchestrator | 2026-03-16 00:58:05.361483 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-16 00:58:05.361504 | orchestrator | Monday 16 March 2026 00:55:06 +0000 (0:00:01.018) 0:00:04.666 ********** 2026-03-16 00:58:05.361510 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-16 00:58:05.361518 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-16 00:58:05.361523 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-16 00:58:05.361530 | orchestrator | 2026-03-16 00:58:05.361535 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-16 00:58:05.361539 | orchestrator | Monday 16 March 2026 00:55:07 +0000 (0:00:00.405) 0:00:05.072 ********** 2026-03-16 00:58:05.361543 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:58:05.361548 | orchestrator | 2026-03-16 00:58:05.361552 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-03-16 00:58:05.361558 | orchestrator | Monday 16 March 2026 00:55:07 +0000 (0:00:00.582) 0:00:05.654 ********** 2026-03-16 00:58:05.361600 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-16 00:58:05.361625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-16 00:58:05.361633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-16 00:58:05.361641 | orchestrator |
2026-03-16 00:58:05.361649 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] **************
2026-03-16 00:58:05.361653 | orchestrator | Monday 16 March 2026 00:55:10 +0000 (0:00:03.283) 0:00:08.938 **********
2026-03-16 00:58:05.361657 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:58:05.361661 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:58:05.361664 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:58:05.361669 | orchestrator |
2026-03-16 00:58:05.361674 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2026-03-16 00:58:05.361678 | orchestrator | Monday 16 March 2026 00:55:11 +0000 (0:00:00.794) 0:00:09.732 **********
2026-03-16 00:58:05.361683 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:58:05.361687 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:58:05.361691 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:58:05.361695 | orchestrator |
2026-03-16 00:58:05.361700 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2026-03-16 00:58:05.361704 | orchestrator | Monday 16 March 2026 00:55:13 +0000 (0:00:01.679) 0:00:11.412 **********
2026-03-16 00:58:05.361712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-16 00:58:05.361722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-16 00:58:05.361734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-16 00:58:05.361738 | orchestrator |
2026-03-16 00:58:05.361743 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2026-03-16 00:58:05.361748 | orchestrator | Monday 16 March 2026 00:55:17 +0000 (0:00:03.763) 0:00:15.176 **********
2026-03-16 00:58:05.361752 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:58:05.361756 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:58:05.361760 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:58:05.361765 | orchestrator |
2026-03-16 00:58:05.361769 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2026-03-16 00:58:05.361774 | orchestrator | Monday 16 March 2026 00:55:18 +0000 (0:00:01.310) 0:00:16.486 **********
2026-03-16 00:58:05.361781 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:58:05.361786 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:58:05.361790 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:58:05.361794 | orchestrator |
2026-03-16 00:58:05.361799 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-16 00:58:05.361804 | orchestrator | Monday 16 March 2026 00:55:23 +0000 (0:00:04.894) 0:00:21.381 **********
2026-03-16 00:58:05.361811 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-16 00:58:05.361817 | orchestrator |
2026-03-16 00:58:05.361821 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-03-16 00:58:05.361826 | orchestrator | Monday 16 March 2026 00:55:23 +0000 (0:00:00.461) 0:00:21.842 **********
2026-03-16 00:58:05.361835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-16 00:58:05.361843 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:58:05.361853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-16 00:58:05.361866 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:58:05.361876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-16 00:58:05.361883 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:58:05.361888 | orchestrator |
2026-03-16 00:58:05.361893 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-03-16 00:58:05.361897 | orchestrator | Monday 16 March 2026 00:55:26 +0000 (0:00:02.995) 0:00:24.838 **********
2026-03-16 00:58:05.361903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-16 00:58:05.361914 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:58:05.361924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-16 00:58:05.361930 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:58:05.361937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-16 00:58:05.361952 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:58:05.361958 | orchestrator |
2026-03-16 00:58:05.361962 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-03-16 00:58:05.361966 | orchestrator | Monday 16 March 2026 00:55:29 +0000 (0:00:02.256) 0:00:27.094 **********
2026-03-16 00:58:05.361975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-16 00:58:05.361983 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:58:05.361990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-16 00:58:05.362002 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:58:05.362009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-16 00:58:05.362069 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:58:05.362077 | orchestrator |
2026-03-16 00:58:05.362083 | orchestrator | TASK [mariadb : Check mariadb containers] **************************************
2026-03-16 00:58:05.362089 | orchestrator | Monday 16 March 2026 00:55:31 +0000 (0:00:02.576) 0:00:29.671 **********
2026-03-16 00:58:05.362382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-16 00:58:05.362461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-16 00:58:05.362480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-16 00:58:05.362489 | orchestrator |
2026-03-16 00:58:05.362494 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2026-03-16 00:58:05.362499 | orchestrator | Monday 16 March 2026 00:55:34 +0000 (0:00:02.902) 0:00:32.574 **********
2026-03-16 00:58:05.362503 | orchestrator | changed: [testbed-node-0]
2026-03-16 00:58:05.362507 | orchestrator | changed: [testbed-node-1]
2026-03-16 00:58:05.362511 | orchestrator | changed: [testbed-node-2]
2026-03-16 00:58:05.362515 | orchestrator |
2026-03-16 00:58:05.362519 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-03-16 00:58:05.362523 | orchestrator | Monday 16 March 2026 00:55:35 +0000 (0:00:00.788) 0:00:33.362 **********
2026-03-16 00:58:05.362526 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:58:05.362530 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:58:05.362534 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:58:05.362538 | orchestrator |
2026-03-16 00:58:05.362542 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-03-16 00:58:05.362549 | orchestrator | Monday 16 March 2026 00:55:35 +0000 (0:00:00.401) 0:00:33.764 **********
2026-03-16 00:58:05.362553 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:58:05.362557 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:58:05.362561 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:58:05.362565 | orchestrator |
2026-03-16 00:58:05.362568 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-03-16 00:58:05.362572 | orchestrator | Monday 16 March 2026 00:55:36 +0000 (0:00:00.391) 0:00:34.155 **********
2026-03-16 00:58:05.362577 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-03-16 00:58:05.362582 | orchestrator | ...ignoring
2026-03-16 00:58:05.362586 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-03-16 00:58:05.362590 | orchestrator | ...ignoring
2026-03-16 00:58:05.362594 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-03-16 00:58:05.362598 | orchestrator | ...ignoring
2026-03-16 00:58:05.362603 | orchestrator |
2026-03-16 00:58:05.362610 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-03-16 00:58:05.362627 | orchestrator | Monday 16 March 2026 00:55:47 +0000 (0:00:11.017) 0:00:45.173 **********
2026-03-16 00:58:05.362636 | orchestrator | ok: [testbed-node-0]
2026-03-16 00:58:05.362641 | orchestrator | ok: [testbed-node-1]
2026-03-16 00:58:05.362647 | orchestrator | ok: [testbed-node-2]
2026-03-16 00:58:05.362653 | orchestrator |
2026-03-16 00:58:05.362659 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-03-16 00:58:05.362664 | orchestrator | Monday 16 March 2026 00:55:47 +0000 (0:00:00.780) 0:00:45.619 **********
2026-03-16 00:58:05.362670 | orchestrator | skipping: [testbed-node-0]
2026-03-16 00:58:05.362676 | orchestrator | skipping: [testbed-node-1]
2026-03-16 00:58:05.362682 | orchestrator | skipping: [testbed-node-2]
2026-03-16 00:58:05.362688 | orchestrator |
2026-03-16 00:58:05.362694 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-03-16 00:58:05.362700 | orchestrator | Monday 16 March 2026 00:55:48 +0000 (0:00:00.780) 0:00:46.400 **********
2026-03-16 00:58:05.362706 |
orchestrator | skipping: [testbed-node-0] 2026-03-16 00:58:05.362713 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:58:05.362719 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:58:05.362725 | orchestrator | 2026-03-16 00:58:05.362731 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-03-16 00:58:05.362737 | orchestrator | Monday 16 March 2026 00:55:48 +0000 (0:00:00.467) 0:00:46.868 ********** 2026-03-16 00:58:05.362743 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:58:05.362750 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:58:05.362755 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:58:05.362763 | orchestrator | 2026-03-16 00:58:05.362767 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-03-16 00:58:05.362776 | orchestrator | Monday 16 March 2026 00:55:49 +0000 (0:00:00.493) 0:00:47.361 ********** 2026-03-16 00:58:05.362780 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:58:05.362784 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:58:05.362787 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:58:05.362791 | orchestrator | 2026-03-16 00:58:05.362795 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-03-16 00:58:05.362799 | orchestrator | Monday 16 March 2026 00:55:49 +0000 (0:00:00.458) 0:00:47.819 ********** 2026-03-16 00:58:05.362804 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:58:05.362807 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:58:05.362811 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:58:05.362815 | orchestrator | 2026-03-16 00:58:05.362819 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-16 00:58:05.362823 | orchestrator | Monday 16 March 2026 00:55:50 +0000 (0:00:00.767) 0:00:48.587 ********** 2026-03-16 00:58:05.362828 | orchestrator | 
skipping: [testbed-node-1] 2026-03-16 00:58:05.362832 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:58:05.362836 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-03-16 00:58:05.362840 | orchestrator | 2026-03-16 00:58:05.362843 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-03-16 00:58:05.362847 | orchestrator | Monday 16 March 2026 00:55:51 +0000 (0:00:00.441) 0:00:49.028 ********** 2026-03-16 00:58:05.362851 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:58:05.362855 | orchestrator | 2026-03-16 00:58:05.362859 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-03-16 00:58:05.362862 | orchestrator | Monday 16 March 2026 00:56:02 +0000 (0:00:11.148) 0:01:00.177 ********** 2026-03-16 00:58:05.362866 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:58:05.362870 | orchestrator | 2026-03-16 00:58:05.362874 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-16 00:58:05.362878 | orchestrator | Monday 16 March 2026 00:56:02 +0000 (0:00:00.140) 0:01:00.317 ********** 2026-03-16 00:58:05.362882 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:58:05.362885 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:58:05.362889 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:58:05.362893 | orchestrator | 2026-03-16 00:58:05.362897 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-03-16 00:58:05.362901 | orchestrator | Monday 16 March 2026 00:56:03 +0000 (0:00:01.082) 0:01:01.400 ********** 2026-03-16 00:58:05.362905 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:58:05.362909 | orchestrator | 2026-03-16 00:58:05.362913 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-03-16 00:58:05.362916 | orchestrator | 
Monday 16 March 2026 00:56:11 +0000 (0:00:08.319) 0:01:09.720 ********** 2026-03-16 00:58:05.362920 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:58:05.362924 | orchestrator | 2026-03-16 00:58:05.362928 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2026-03-16 00:58:05.362931 | orchestrator | Monday 16 March 2026 00:56:13 +0000 (0:00:01.594) 0:01:11.315 ********** 2026-03-16 00:58:05.362939 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:58:05.362943 | orchestrator | 2026-03-16 00:58:05.362946 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-03-16 00:58:05.362950 | orchestrator | Monday 16 March 2026 00:56:15 +0000 (0:00:02.590) 0:01:13.905 ********** 2026-03-16 00:58:05.362954 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:58:05.362958 | orchestrator | 2026-03-16 00:58:05.362962 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-03-16 00:58:05.362966 | orchestrator | Monday 16 March 2026 00:56:16 +0000 (0:00:00.143) 0:01:14.048 ********** 2026-03-16 00:58:05.362970 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:58:05.362974 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:58:05.362982 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:58:05.362986 | orchestrator | 2026-03-16 00:58:05.362990 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-03-16 00:58:05.362994 | orchestrator | Monday 16 March 2026 00:56:16 +0000 (0:00:00.318) 0:01:14.367 ********** 2026-03-16 00:58:05.362998 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:58:05.363002 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-03-16 00:58:05.363006 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:58:05.363010 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:58:05.363014 | 
orchestrator | 2026-03-16 00:58:05.363018 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-16 00:58:05.363022 | orchestrator | skipping: no hosts matched 2026-03-16 00:58:05.363026 | orchestrator | 2026-03-16 00:58:05.363029 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-16 00:58:05.363033 | orchestrator | 2026-03-16 00:58:05.363037 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-16 00:58:05.363041 | orchestrator | Monday 16 March 2026 00:56:17 +0000 (0:00:00.658) 0:01:15.025 ********** 2026-03-16 00:58:05.363045 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:58:05.363049 | orchestrator | 2026-03-16 00:58:05.363053 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-16 00:58:05.363056 | orchestrator | Monday 16 March 2026 00:56:40 +0000 (0:00:23.160) 0:01:38.186 ********** 2026-03-16 00:58:05.363061 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:58:05.363064 | orchestrator | 2026-03-16 00:58:05.363068 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-16 00:58:05.363072 | orchestrator | Monday 16 March 2026 00:56:50 +0000 (0:00:10.646) 0:01:48.833 ********** 2026-03-16 00:58:05.363076 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:58:05.363080 | orchestrator | 2026-03-16 00:58:05.363083 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-16 00:58:05.363087 | orchestrator | 2026-03-16 00:58:05.363091 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-16 00:58:05.363095 | orchestrator | Monday 16 March 2026 00:56:53 +0000 (0:00:02.520) 0:01:51.354 ********** 2026-03-16 00:58:05.363099 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:58:05.363103 | 
orchestrator | 2026-03-16 00:58:05.363106 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-16 00:58:05.363113 | orchestrator | Monday 16 March 2026 00:57:11 +0000 (0:00:18.343) 0:02:09.697 ********** 2026-03-16 00:58:05.363117 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:58:05.363121 | orchestrator | 2026-03-16 00:58:05.363125 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-16 00:58:05.363129 | orchestrator | Monday 16 March 2026 00:57:28 +0000 (0:00:16.682) 0:02:26.380 ********** 2026-03-16 00:58:05.363132 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:58:05.363136 | orchestrator | 2026-03-16 00:58:05.363140 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-03-16 00:58:05.363144 | orchestrator | 2026-03-16 00:58:05.363148 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-16 00:58:05.363152 | orchestrator | Monday 16 March 2026 00:57:31 +0000 (0:00:02.877) 0:02:29.257 ********** 2026-03-16 00:58:05.363155 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:58:05.363159 | orchestrator | 2026-03-16 00:58:05.363163 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-16 00:58:05.363167 | orchestrator | Monday 16 March 2026 00:57:43 +0000 (0:00:11.864) 0:02:41.121 ********** 2026-03-16 00:58:05.363170 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:58:05.363174 | orchestrator | 2026-03-16 00:58:05.363178 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-16 00:58:05.363182 | orchestrator | Monday 16 March 2026 00:57:47 +0000 (0:00:04.591) 0:02:45.713 ********** 2026-03-16 00:58:05.363186 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:58:05.363194 | orchestrator | 2026-03-16 00:58:05.363198 | orchestrator | PLAY [Apply 
mariadb post-configuration] **************************************** 2026-03-16 00:58:05.363201 | orchestrator | 2026-03-16 00:58:05.363205 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-16 00:58:05.363209 | orchestrator | Monday 16 March 2026 00:57:49 +0000 (0:00:02.133) 0:02:47.846 ********** 2026-03-16 00:58:05.363213 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:58:05.363217 | orchestrator | 2026-03-16 00:58:05.363221 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-03-16 00:58:05.363225 | orchestrator | Monday 16 March 2026 00:57:50 +0000 (0:00:00.470) 0:02:48.317 ********** 2026-03-16 00:58:05.363229 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:58:05.363232 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:58:05.363236 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:58:05.363240 | orchestrator | 2026-03-16 00:58:05.363244 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-03-16 00:58:05.363248 | orchestrator | Monday 16 March 2026 00:57:52 +0000 (0:00:02.407) 0:02:50.725 ********** 2026-03-16 00:58:05.363252 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:58:05.363256 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:58:05.363259 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:58:05.363264 | orchestrator | 2026-03-16 00:58:05.363268 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-03-16 00:58:05.363272 | orchestrator | Monday 16 March 2026 00:57:55 +0000 (0:00:02.431) 0:02:53.156 ********** 2026-03-16 00:58:05.363276 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:58:05.363283 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:58:05.363287 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:58:05.363292 | 
orchestrator | 2026-03-16 00:58:05.363296 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-03-16 00:58:05.363300 | orchestrator | Monday 16 March 2026 00:57:57 +0000 (0:00:02.426) 0:02:55.583 ********** 2026-03-16 00:58:05.363305 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:58:05.363308 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:58:05.363312 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:58:05.363316 | orchestrator | 2026-03-16 00:58:05.363320 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-03-16 00:58:05.363324 | orchestrator | Monday 16 March 2026 00:58:00 +0000 (0:00:02.407) 0:02:57.990 ********** 2026-03-16 00:58:05.363328 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:58:05.363332 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:58:05.363336 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:58:05.363391 | orchestrator | 2026-03-16 00:58:05.363398 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-16 00:58:05.363402 | orchestrator | Monday 16 March 2026 00:58:03 +0000 (0:00:03.302) 0:03:01.293 ********** 2026-03-16 00:58:05.363406 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:58:05.363409 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:58:05.363413 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:58:05.363417 | orchestrator | 2026-03-16 00:58:05.363421 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-16 00:58:05.363424 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-16 00:58:05.363429 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-03-16 00:58:05.363434 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  
rescued=0 ignored=1  2026-03-16 00:58:05.363439 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-16 00:58:05.363447 | orchestrator | 2026-03-16 00:58:05.363451 | orchestrator | 2026-03-16 00:58:05.363455 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-16 00:58:05.363459 | orchestrator | Monday 16 March 2026 00:58:03 +0000 (0:00:00.244) 0:03:01.537 ********** 2026-03-16 00:58:05.363463 | orchestrator | =============================================================================== 2026-03-16 00:58:05.363466 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 41.50s 2026-03-16 00:58:05.363470 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 27.33s 2026-03-16 00:58:05.363477 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.86s 2026-03-16 00:58:05.363481 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 11.15s 2026-03-16 00:58:05.363485 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.02s 2026-03-16 00:58:05.363489 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.32s 2026-03-16 00:58:05.363493 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.40s 2026-03-16 00:58:05.363497 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.89s 2026-03-16 00:58:05.363501 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.59s 2026-03-16 00:58:05.363505 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.76s 2026-03-16 00:58:05.363509 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.30s 2026-03-16 
00:58:05.363512 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.28s 2026-03-16 00:58:05.363516 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.00s 2026-03-16 00:58:05.363520 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 2.90s 2026-03-16 00:58:05.363525 | orchestrator | Check MariaDB service --------------------------------------------------- 2.88s 2026-03-16 00:58:05.363529 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.59s 2026-03-16 00:58:05.363532 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.58s 2026-03-16 00:58:05.363536 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.43s 2026-03-16 00:58:05.363540 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.43s 2026-03-16 00:58:05.363544 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.41s 2026-03-16 00:58:05.363548 | orchestrator | 2026-03-16 00:58:05 | INFO  | Task 670212d8-2270-47e3-8265-b7ebb97477d3 is in state STARTED 2026-03-16 00:58:05.363903 | orchestrator | 2026-03-16 00:58:05 | INFO  | Task 6550caf0-7d1a-48ab-bad2-baccec0559ae is in state STARTED 2026-03-16 00:58:05.363951 | orchestrator | 2026-03-16 00:58:05 | INFO  | Task 563ab3b3-a0b9-4e1b-bf47-1fceb456dd39 is in state STARTED 2026-03-16 00:58:05.363961 | orchestrator | 2026-03-16 00:58:05 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:59:30.646643 | orchestrator | 2026-03-16 00:59:30 | INFO  | Task 670212d8-2270-47e3-8265-b7ebb97477d3 is in state STARTED 2026-03-16 00:59:30.648125 | orchestrator | 2026-03-16 00:59:30 | INFO  | Task 6550caf0-7d1a-48ab-bad2-baccec0559ae is in
state STARTED
2026-03-16 00:59:30.650905 | orchestrator | 2026-03-16 00:59:30 | INFO  | Task 563ab3b3-a0b9-4e1b-bf47-1fceb456dd39 is in state STARTED
2026-03-16 00:59:30.650985 | orchestrator | 2026-03-16 00:59:30 | INFO  | Wait 1 second(s) until the next check
2026-03-16 00:59:33.701564 | orchestrator | 2026-03-16 00:59:33 | INFO  | Task 670212d8-2270-47e3-8265-b7ebb97477d3 is in state STARTED
2026-03-16 00:59:33.705529 | orchestrator | 2026-03-16 00:59:33 | INFO  | Task 6550caf0-7d1a-48ab-bad2-baccec0559ae is in state SUCCESS
2026-03-16 00:59:33.706940 | orchestrator |
2026-03-16 00:59:33.706996 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-16 00:59:33.707002 | orchestrator | 2.16.14
2026-03-16 00:59:33.707007 | orchestrator |
2026-03-16 00:59:33.707011 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-03-16 00:59:33.707016 | orchestrator |
2026-03-16 00:59:33.707021 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-16 00:59:33.707025 | orchestrator | Monday 16 March 2026 00:57:18 +0000 (0:00:00.574) 0:00:00.574 **********
2026-03-16 00:59:33.707029 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-16 00:59:33.707034 | orchestrator |
2026-03-16 00:59:33.707039 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-16 00:59:33.707043 | orchestrator | Monday 16 March 2026 00:57:19 +0000 (0:00:00.636) 0:00:01.211 **********
2026-03-16 00:59:33.707047 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:59:33.707070 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:59:33.707074 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:59:33.707077 | orchestrator |
2026-03-16 00:59:33.707081 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-16 00:59:33.707085 | orchestrator | Monday 16 March 2026 00:57:19 +0000 (0:00:00.654) 0:00:01.865 **********
2026-03-16 00:59:33.707089 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:59:33.707093 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:59:33.707096 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:59:33.707100 | orchestrator |
2026-03-16 00:59:33.707104 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-16 00:59:33.707108 | orchestrator | Monday 16 March 2026 00:57:19 +0000 (0:00:00.306) 0:00:02.171 **********
2026-03-16 00:59:33.707112 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:59:33.707115 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:59:33.707119 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:59:33.707123 | orchestrator |
2026-03-16 00:59:33.707127 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-16 00:59:33.707130 | orchestrator | Monday 16 March 2026 00:57:20 +0000 (0:00:00.937) 0:00:03.109 **********
2026-03-16 00:59:33.707134 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:59:33.707138 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:59:33.707142 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:59:33.707145 | orchestrator |
2026-03-16 00:59:33.707168 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-16 00:59:33.707172 | orchestrator | Monday 16 March 2026 00:57:21 +0000 (0:00:00.332) 0:00:03.442 **********
2026-03-16 00:59:33.707176 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:59:33.707180 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:59:33.707184 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:59:33.707187 | orchestrator |
2026-03-16 00:59:33.707191 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-16 00:59:33.707195 | orchestrator | Monday 16 March 2026 00:57:21 +0000 (0:00:00.309) 0:00:03.751 **********
2026-03-16 00:59:33.707199 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:59:33.707202 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:59:33.707206 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:59:33.707210 | orchestrator |
2026-03-16 00:59:33.707214 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-16 00:59:33.707217 | orchestrator | Monday 16 March 2026 00:57:21 +0000 (0:00:00.320) 0:00:04.072 **********
2026-03-16 00:59:33.707221 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:59:33.707226 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:59:33.707229 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:59:33.707233 | orchestrator |
2026-03-16 00:59:33.707237 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-16 00:59:33.707241 | orchestrator | Monday 16 March 2026 00:57:22 +0000 (0:00:00.490) 0:00:04.563 **********
2026-03-16 00:59:33.707245 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:59:33.707249 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:59:33.707252 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:59:33.707256 | orchestrator |
2026-03-16 00:59:33.707260 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-16 00:59:33.707264 | orchestrator | Monday 16 March 2026 00:57:22 +0000 (0:00:00.311) 0:00:04.874 **********
2026-03-16 00:59:33.707268 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-16 00:59:33.707271 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-16 00:59:33.707275 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-16 00:59:33.707279 | orchestrator |
2026-03-16 00:59:33.707282 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-16 00:59:33.707286 | orchestrator | Monday 16 March 2026 00:57:23 +0000 (0:00:00.455) 0:00:05.552 **********
2026-03-16 00:59:33.707290 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:59:33.707299 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:59:33.707302 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:59:33.707306 | orchestrator |
2026-03-16 00:59:33.707339 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-16 00:59:33.707343 | orchestrator | Monday 16 March 2026 00:57:23 +0000 (0:00:00.455) 0:00:06.007 **********
2026-03-16 00:59:33.707347 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-16 00:59:33.707360 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-16 00:59:33.707364 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-16 00:59:33.707368 | orchestrator |
2026-03-16 00:59:33.707372 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-16 00:59:33.707376 | orchestrator | Monday 16 March 2026 00:57:27 +0000 (0:00:03.234) 0:00:09.242 **********
2026-03-16 00:59:33.707394 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-16 00:59:33.707399 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-16 00:59:33.707403 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-16 00:59:33.707406 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:59:33.707410 | orchestrator |
2026-03-16 00:59:33.707424 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-16 00:59:33.707428 | orchestrator | Monday 16 March 2026 00:57:27 +0000 (0:00:00.653) 0:00:09.895 **********
2026-03-16 00:59:33.707434 | orchestrator |
skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.707440 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.707444 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.707448 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:59:33.707452 | orchestrator | 2026-03-16 00:59:33.707456 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-16 00:59:33.707460 | orchestrator | Monday 16 March 2026 00:57:28 +0000 (0:00:00.804) 0:00:10.700 ********** 2026-03-16 00:59:33.707465 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.707472 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 
'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.707476 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.707484 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:59:33.707487 | orchestrator | 2026-03-16 00:59:33.707491 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-16 00:59:33.707495 | orchestrator | Monday 16 March 2026 00:57:28 +0000 (0:00:00.357) 0:00:11.058 ********** 2026-03-16 00:59:33.707500 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '09b2eea58edb', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-16 00:57:24.477637', 'end': '2026-03-16 00:57:25.517001', 'delta': '0:00:01.039364', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['09b2eea58edb'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-16 00:59:33.707510 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'fb32c1392897', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-16 00:57:26.293123', 'end': '2026-03-16 00:57:26.337509', 'delta': '0:00:00.044386', 'msg': '', 
'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fb32c1392897'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-16 00:59:33.707518 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'cdfddb202929', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-16 00:57:26.850775', 'end': '2026-03-16 00:57:26.888743', 'delta': '0:00:00.037968', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cdfddb202929'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-16 00:59:33.707522 | orchestrator | 2026-03-16 00:59:33.707526 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-16 00:59:33.707530 | orchestrator | Monday 16 March 2026 00:57:29 +0000 (0:00:00.201) 0:00:11.259 ********** 2026-03-16 00:59:33.707533 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:59:33.707537 | orchestrator | ok: [testbed-node-4] 2026-03-16 00:59:33.707541 | orchestrator | ok: [testbed-node-5] 2026-03-16 00:59:33.707545 | orchestrator | 2026-03-16 00:59:33.707549 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-16 00:59:33.707552 | orchestrator | Monday 16 March 2026 00:57:29 +0000 (0:00:00.458) 
0:00:11.717 ********** 2026-03-16 00:59:33.707556 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-03-16 00:59:33.707560 | orchestrator | 2026-03-16 00:59:33.707564 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-16 00:59:33.707567 | orchestrator | Monday 16 March 2026 00:57:31 +0000 (0:00:02.466) 0:00:14.184 ********** 2026-03-16 00:59:33.707571 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:59:33.707575 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:59:33.707579 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:59:33.707583 | orchestrator | 2026-03-16 00:59:33.707586 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-16 00:59:33.707590 | orchestrator | Monday 16 March 2026 00:57:32 +0000 (0:00:00.305) 0:00:14.489 ********** 2026-03-16 00:59:33.707597 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:59:33.707601 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:59:33.707605 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:59:33.707609 | orchestrator | 2026-03-16 00:59:33.707612 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-16 00:59:33.707616 | orchestrator | Monday 16 March 2026 00:57:32 +0000 (0:00:00.447) 0:00:14.936 ********** 2026-03-16 00:59:33.707620 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:59:33.707624 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:59:33.707627 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:59:33.707631 | orchestrator | 2026-03-16 00:59:33.707635 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-16 00:59:33.707639 | orchestrator | Monday 16 March 2026 00:57:33 +0000 (0:00:00.537) 0:00:15.474 ********** 2026-03-16 00:59:33.707643 | orchestrator | ok: [testbed-node-3] 2026-03-16 00:59:33.707646 | 
orchestrator | 2026-03-16 00:59:33.707650 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-16 00:59:33.707654 | orchestrator | Monday 16 March 2026 00:57:33 +0000 (0:00:00.140) 0:00:15.615 ********** 2026-03-16 00:59:33.707658 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:59:33.707661 | orchestrator | 2026-03-16 00:59:33.707665 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-16 00:59:33.707797 | orchestrator | Monday 16 March 2026 00:57:33 +0000 (0:00:00.245) 0:00:15.860 ********** 2026-03-16 00:59:33.707816 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:59:33.707822 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:59:33.707844 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:59:33.707859 | orchestrator | 2026-03-16 00:59:33.707865 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-16 00:59:33.707871 | orchestrator | Monday 16 March 2026 00:57:33 +0000 (0:00:00.317) 0:00:16.178 ********** 2026-03-16 00:59:33.707877 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:59:33.707883 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:59:33.707889 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:59:33.707895 | orchestrator | 2026-03-16 00:59:33.707901 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-16 00:59:33.707907 | orchestrator | Monday 16 March 2026 00:57:34 +0000 (0:00:00.319) 0:00:16.498 ********** 2026-03-16 00:59:33.707913 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:59:33.707919 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:59:33.707925 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:59:33.707931 | orchestrator | 2026-03-16 00:59:33.707937 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 
2026-03-16 00:59:33.707942 | orchestrator | Monday 16 March 2026 00:57:34 +0000 (0:00:00.548) 0:00:17.046 ********** 2026-03-16 00:59:33.707948 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:59:33.707954 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:59:33.707960 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:59:33.707966 | orchestrator | 2026-03-16 00:59:33.707978 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-16 00:59:33.707984 | orchestrator | Monday 16 March 2026 00:57:35 +0000 (0:00:00.350) 0:00:17.396 ********** 2026-03-16 00:59:33.707991 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:59:33.707995 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:59:33.707999 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:59:33.708003 | orchestrator | 2026-03-16 00:59:33.708007 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-16 00:59:33.708011 | orchestrator | Monday 16 March 2026 00:57:35 +0000 (0:00:00.338) 0:00:17.734 ********** 2026-03-16 00:59:33.708015 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:59:33.708018 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:59:33.708022 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:59:33.708031 | orchestrator | 2026-03-16 00:59:33.708035 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-16 00:59:33.708044 | orchestrator | Monday 16 March 2026 00:57:35 +0000 (0:00:00.339) 0:00:18.074 ********** 2026-03-16 00:59:33.708048 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:59:33.708051 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:59:33.708055 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:59:33.708059 | orchestrator | 2026-03-16 00:59:33.708062 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 
2026-03-16 00:59:33.708066 | orchestrator | Monday 16 March 2026 00:57:36 +0000 (0:00:00.413) 0:00:18.487 ********** 2026-03-16 00:59:33.708071 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--71e0430a--6bf1--53ec--905e--7c884e89f784-osd--block--71e0430a--6bf1--53ec--905e--7c884e89f784', 'dm-uuid-LVM-wfWQF1CMpG436vHAFB7PLE7Lu4MagAEY3zN1PL2no4vUlqjNM9LHgIqpZ4CT6Met'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-16 00:59:33.708077 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--40b418b1--0bd6--568c--82b5--8ddc4abd3365-osd--block--40b418b1--0bd6--568c--82b5--8ddc4abd3365', 'dm-uuid-LVM-i8dnrqRhoTtIY3c7MgqceRpLo4rsKyC9qSnNF0kDUEfKM0Wf1KtCNzervq0fTfSo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-16 00:59:33.708082 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:59:33.708087 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': 
'', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:59:33.708090 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:59:33.708094 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:59:33.708101 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ded6401a--969b--5c16--b1be--1b69fe43ded8-osd--block--ded6401a--969b--5c16--b1be--1b69fe43ded8', 'dm-uuid-LVM-C6h8PY31H7NF0avlMPMNuk3fumzXPicAnoRcVmPbxL43O22LzoMTek6lfK0ZLeGD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-16 00:59:33.708113 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:59:33.708117 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--01ad088d--533b--5bd8--92eb--284afc0ad32d-osd--block--01ad088d--533b--5bd8--92eb--284afc0ad32d', 'dm-uuid-LVM-0sBWRcIEYVfhS9z0btZt3E1nbLdVN1xAXwO6Fyl2iDazFEDpyXKpLNzLrLEf9N8c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-16 00:59:33.708122 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:59:33.708125 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:59:33.708129 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:59:33.708133 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:59:33.708137 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:59:33.708141 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:59:33.708219 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:59:33.708235 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa', 'scsi-SQEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa-part1', 'scsi-SQEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa-part14', 'scsi-SQEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa-part15', 'scsi-SQEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa-part16', 'scsi-SQEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 
'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-16 00:59:33.708241 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:59:33.708246 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--71e0430a--6bf1--53ec--905e--7c884e89f784-osd--block--71e0430a--6bf1--53ec--905e--7c884e89f784'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MQHZCZ-2P0q-WEBW-lB0Y-5ZU4-EERo-X0rt2s', 'scsi-0QEMU_QEMU_HARDDISK_ea250d8e-8a1a-4b7c-87ac-b5f969c8dfc9', 'scsi-SQEMU_QEMU_HARDDISK_ea250d8e-8a1a-4b7c-87ac-b5f969c8dfc9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-16 00:59:33.708251 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:59:33.708262 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:59:33.708270 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--40b418b1--0bd6--568c--82b5--8ddc4abd3365-osd--block--40b418b1--0bd6--568c--82b5--8ddc4abd3365'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZoV2Pm-dKR1-PRe1-hXHc-O2KZ-sJw6-5NOhRq', 'scsi-0QEMU_QEMU_HARDDISK_638de7de-7e30-41bf-b0e2-bce66f40688c', 'scsi-SQEMU_QEMU_HARDDISK_638de7de-7e30-41bf-b0e2-bce66f40688c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-16 00:59:33.708275 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8261b325-336c-474c-bfd4-8f783607e19f', 'scsi-SQEMU_QEMU_HARDDISK_8261b325-336c-474c-bfd4-8f783607e19f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-16 00:59:33.708280 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:59:33.708284 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-16-00-03-02-00']}, 'model': 'QEMU 
DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-16 00:59:33.708288 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:59:33.708299 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5', 'scsi-SQEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5-part1', 'scsi-SQEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5-part14', 'scsi-SQEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5-part15', 'scsi-SQEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5-part16', 
'scsi-SQEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-16 00:59:33.708307 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ded6401a--969b--5c16--b1be--1b69fe43ded8-osd--block--ded6401a--969b--5c16--b1be--1b69fe43ded8'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-aGd2Ie-b0yj-4Gpc-NVJZ-kPi6-fxvA-wG3FaP', 'scsi-0QEMU_QEMU_HARDDISK_dd732262-e9ae-4e48-8009-641fb05b3358', 'scsi-SQEMU_QEMU_HARDDISK_dd732262-e9ae-4e48-8009-641fb05b3358'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-16 00:59:33.708312 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--01ad088d--533b--5bd8--92eb--284afc0ad32d-osd--block--01ad088d--533b--5bd8--92eb--284afc0ad32d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vB3OGU-zA3Q-mHqp-oSQI-LWGG-LkAy-H1f9lO', 'scsi-0QEMU_QEMU_HARDDISK_1db695b4-2be8-41cf-b2f3-0a666ad94649', 'scsi-SQEMU_QEMU_HARDDISK_1db695b4-2be8-41cf-b2f3-0a666ad94649'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-16 00:59:33.708316 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--20eacd0a--f744--531e--8511--c5afb936ef86-osd--block--20eacd0a--f744--531e--8511--c5afb936ef86', 'dm-uuid-LVM-lSXyLnOov7r2zaqmGGp5HpJcdapQhsc2WIkvSCn26GbMJKocRSo2V2ZbLipfysP3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-16 00:59:33.708320 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5bc35b8-8936-4f39-b3b2-4c8e21a1af22', 'scsi-SQEMU_QEMU_HARDDISK_e5bc35b8-8936-4f39-b3b2-4c8e21a1af22'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-16 00:59:33.708330 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c2695ca6--70a1--5c1a--b7de--886954e6bf07-osd--block--c2695ca6--70a1--5c1a--b7de--886954e6bf07', 'dm-uuid-LVM-m5OlQhBlwbjaWKZJHpKDAF3Qtrt8tOpo0N4ndCl65u5FrpPM2sAQRID2cruaNFRe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-16 00:59:33.708618 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:59:33.708631 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-16-00-03-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-16 00:59:33.708636 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:59:33.708640 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:59:33.708644 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:59:33.708648 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:59:33.708652 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:59:33.708656 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:59:33.708665 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:59:33.708672 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-16 00:59:33.708682 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055', 'scsi-SQEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055-part1', 'scsi-SQEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055-part14', 'scsi-SQEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055-part15', 'scsi-SQEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055-part16', 'scsi-SQEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-16 00:59:33.708687 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--20eacd0a--f744--531e--8511--c5afb936ef86-osd--block--20eacd0a--f744--531e--8511--c5afb936ef86'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZyUzGW-HjlS-XV4V-WGhw-az3f-AHso-PXH4dy', 'scsi-0QEMU_QEMU_HARDDISK_da655a5c-29e3-4c18-87b3-c0b6111b4096', 'scsi-SQEMU_QEMU_HARDDISK_da655a5c-29e3-4c18-87b3-c0b6111b4096'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-16 00:59:33.708691 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c2695ca6--70a1--5c1a--b7de--886954e6bf07-osd--block--c2695ca6--70a1--5c1a--b7de--886954e6bf07'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Ilfj6s-Om41-y7OG-sdvd-dNA1-ULZC-j6tQ2n', 'scsi-0QEMU_QEMU_HARDDISK_75257afc-ff3d-423c-9b8c-9aa6b4de753a', 'scsi-SQEMU_QEMU_HARDDISK_75257afc-ff3d-423c-9b8c-9aa6b4de753a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-16 00:59:33.708701 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_573bd76d-2068-40ae-bffe-bd7cc0e0b9d7', 'scsi-SQEMU_QEMU_HARDDISK_573bd76d-2068-40ae-bffe-bd7cc0e0b9d7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-16 00:59:33.708709 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-16-00-03-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-16 00:59:33.708713 | orchestrator | skipping: [testbed-node-5] 2026-03-16 00:59:33.708717 | orchestrator | 2026-03-16 00:59:33.708721 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-16 00:59:33.708725 | orchestrator | Monday 16 March 2026 00:57:36 +0000 (0:00:00.437) 0:00:18.925 ********** 2026-03-16 00:59:33.708729 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--71e0430a--6bf1--53ec--905e--7c884e89f784-osd--block--71e0430a--6bf1--53ec--905e--7c884e89f784', 'dm-uuid-LVM-wfWQF1CMpG436vHAFB7PLE7Lu4MagAEY3zN1PL2no4vUlqjNM9LHgIqpZ4CT6Met'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.708735 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--40b418b1--0bd6--568c--82b5--8ddc4abd3365-osd--block--40b418b1--0bd6--568c--82b5--8ddc4abd3365', 'dm-uuid-LVM-i8dnrqRhoTtIY3c7MgqceRpLo4rsKyC9qSnNF0kDUEfKM0Wf1KtCNzervq0fTfSo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.708739 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.708747 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.708753 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.708760 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.708765 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.708768 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ded6401a--969b--5c16--b1be--1b69fe43ded8-osd--block--ded6401a--969b--5c16--b1be--1b69fe43ded8', 'dm-uuid-LVM-C6h8PY31H7NF0avlMPMNuk3fumzXPicAnoRcVmPbxL43O22LzoMTek6lfK0ZLeGD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.708772 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--01ad088d--533b--5bd8--92eb--284afc0ad32d-osd--block--01ad088d--533b--5bd8--92eb--284afc0ad32d', 'dm-uuid-LVM-0sBWRcIEYVfhS9z0btZt3E1nbLdVN1xAXwO6Fyl2iDazFEDpyXKpLNzLrLEf9N8c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2026-03-16 00:59:33.708784 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.708794 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.708802 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.708809 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.708815 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.708820 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.708836 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.708843 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--20eacd0a--f744--531e--8511--c5afb936ef86-osd--block--20eacd0a--f744--531e--8511--c5afb936ef86', 'dm-uuid-LVM-lSXyLnOov7r2zaqmGGp5HpJcdapQhsc2WIkvSCn26GbMJKocRSo2V2ZbLipfysP3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.708849 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 
00:59:33.708854 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5', 'scsi-SQEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5-part1', 'scsi-SQEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5-part14', 'scsi-SQEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5-part15', 'scsi-SQEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5-part16', 'scsi-SQEMU_QEMU_HARDDISK_6150f17a-ba1c-4854-a1c2-519cb1eb76a5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.708863 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.708869 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c2695ca6--70a1--5c1a--b7de--886954e6bf07-osd--block--c2695ca6--70a1--5c1a--b7de--886954e6bf07', 'dm-uuid-LVM-m5OlQhBlwbjaWKZJHpKDAF3Qtrt8tOpo0N4ndCl65u5FrpPM2sAQRID2cruaNFRe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.708875 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) 
| bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--ded6401a--969b--5c16--b1be--1b69fe43ded8-osd--block--ded6401a--969b--5c16--b1be--1b69fe43ded8'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-aGd2Ie-b0yj-4Gpc-NVJZ-kPi6-fxvA-wG3FaP', 'scsi-0QEMU_QEMU_HARDDISK_dd732262-e9ae-4e48-8009-641fb05b3358', 'scsi-SQEMU_QEMU_HARDDISK_dd732262-e9ae-4e48-8009-641fb05b3358'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.708880 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.708884 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.708891 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--01ad088d--533b--5bd8--92eb--284afc0ad32d-osd--block--01ad088d--533b--5bd8--92eb--284afc0ad32d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vB3OGU-zA3Q-mHqp-oSQI-LWGG-LkAy-H1f9lO', 'scsi-0QEMU_QEMU_HARDDISK_1db695b4-2be8-41cf-b2f3-0a666ad94649', 'scsi-SQEMU_QEMU_HARDDISK_1db695b4-2be8-41cf-b2f3-0a666ad94649'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.708895 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.708902 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.708909 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5bc35b8-8936-4f39-b3b2-4c8e21a1af22', 'scsi-SQEMU_QEMU_HARDDISK_e5bc35b8-8936-4f39-b3b2-4c8e21a1af22'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.708913 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.708917 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.708924 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-16-00-03-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.708928 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:59:33.708932 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 
00:59:33.708941 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa', 'scsi-SQEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa-part1', 'scsi-SQEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa-part14', 'scsi-SQEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa-part15', 'scsi-SQEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa-part16', 'scsi-SQEMU_QEMU_HARDDISK_fb083b0a-67c1-4bfc-bfe8-3ba6a29986fa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.708950 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.708957 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.708966 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--71e0430a--6bf1--53ec--905e--7c884e89f784-osd--block--71e0430a--6bf1--53ec--905e--7c884e89f784'], 'host': 'SCSI storage controller: 
Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MQHZCZ-2P0q-WEBW-lB0Y-5ZU4-EERo-X0rt2s', 'scsi-0QEMU_QEMU_HARDDISK_ea250d8e-8a1a-4b7c-87ac-b5f969c8dfc9', 'scsi-SQEMU_QEMU_HARDDISK_ea250d8e-8a1a-4b7c-87ac-b5f969c8dfc9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.708975 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.708981 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--40b418b1--0bd6--568c--82b5--8ddc4abd3365-osd--block--40b418b1--0bd6--568c--82b5--8ddc4abd3365'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZoV2Pm-dKR1-PRe1-hXHc-O2KZ-sJw6-5NOhRq', 'scsi-0QEMU_QEMU_HARDDISK_638de7de-7e30-41bf-b0e2-bce66f40688c', 'scsi-SQEMU_QEMU_HARDDISK_638de7de-7e30-41bf-b0e2-bce66f40688c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.708988 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055', 'scsi-SQEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055-part1', 'scsi-SQEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055-part14', 'scsi-SQEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055-part15', 
'scsi-SQEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055-part16', 'scsi-SQEMU_QEMU_HARDDISK_9bd3d9e7-1ffb-4a02-abf6-245dbd9bc055-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.709005 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8261b325-336c-474c-bfd4-8f783607e19f', 'scsi-SQEMU_QEMU_HARDDISK_8261b325-336c-474c-bfd4-8f783607e19f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.709011 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--20eacd0a--f744--531e--8511--c5afb936ef86-osd--block--20eacd0a--f744--531e--8511--c5afb936ef86'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZyUzGW-HjlS-XV4V-WGhw-az3f-AHso-PXH4dy', 'scsi-0QEMU_QEMU_HARDDISK_da655a5c-29e3-4c18-87b3-c0b6111b4096', 'scsi-SQEMU_QEMU_HARDDISK_da655a5c-29e3-4c18-87b3-c0b6111b4096'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.709017 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-16-00-03-02-00']}, 
'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.709026 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--c2695ca6--70a1--5c1a--b7de--886954e6bf07-osd--block--c2695ca6--70a1--5c1a--b7de--886954e6bf07'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Ilfj6s-Om41-y7OG-sdvd-dNA1-ULZC-j6tQ2n', 'scsi-0QEMU_QEMU_HARDDISK_75257afc-ff3d-423c-9b8c-9aa6b4de753a', 'scsi-SQEMU_QEMU_HARDDISK_75257afc-ff3d-423c-9b8c-9aa6b4de753a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-16 00:59:33.709033 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:59:33.709042 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_573bd76d-2068-40ae-bffe-bd7cc0e0b9d7', 'scsi-SQEMU_QEMU_HARDDISK_573bd76d-2068-40ae-bffe-bd7cc0e0b9d7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-16 00:59:33.709051 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-16-00-03-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-16 00:59:33.709059 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:59:33.709087 | orchestrator |
2026-03-16 00:59:33.709093 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-16 00:59:33.709100 | orchestrator | Monday 16 March 2026 00:57:37 +0000 (0:00:00.516) 0:00:19.442 **********
2026-03-16 00:59:33.709105 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:59:33.709109 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:59:33.709113 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:59:33.709117 | orchestrator |
2026-03-16 00:59:33.709120 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-16 00:59:33.709136 | orchestrator | Monday 16 March 2026 00:57:37 +0000 (0:00:00.669) 0:00:20.111 **********
2026-03-16 00:59:33.709144 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:59:33.709168 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:59:33.709172 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:59:33.709176 | orchestrator |
2026-03-16 00:59:33.709180 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-16 00:59:33.709184 | orchestrator | Monday 16 March 2026 00:57:38 +0000 (0:00:00.430) 0:00:20.542 **********
2026-03-16 00:59:33.709187 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:59:33.709191 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:59:33.709195 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:59:33.709198 | orchestrator |
2026-03-16 00:59:33.709202 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-16 00:59:33.709206 | orchestrator | Monday 16 March 2026 00:57:39 +0000 (0:00:01.542) 0:00:22.084 **********
2026-03-16 00:59:33.709209 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:59:33.709213 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:59:33.709217 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:59:33.709221 | orchestrator |
2026-03-16 00:59:33.709224 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-16 00:59:33.709228 | orchestrator | Monday 16 March 2026 00:57:40 +0000 (0:00:00.280) 0:00:22.365 **********
2026-03-16 00:59:33.709232 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:59:33.709237 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:59:33.709241 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:59:33.709245 | orchestrator |
2026-03-16 00:59:33.709249 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-16 00:59:33.709254 | orchestrator | Monday 16 March 2026 00:57:40 +0000 (0:00:00.384) 0:00:22.750 **********
2026-03-16 00:59:33.709258 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:59:33.709262 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:59:33.709266 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:59:33.709270 | orchestrator |
2026-03-16 00:59:33.709274 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-16 00:59:33.709279 | orchestrator | Monday 16 March 2026 00:57:41 +0000 (0:00:00.532) 0:00:23.283 **********
2026-03-16 00:59:33.709283 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-16 00:59:33.709288 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-16 00:59:33.709293 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-16 00:59:33.709297 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-16 00:59:33.709301 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-16 00:59:33.709306 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-16 00:59:33.709310 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-03-16 00:59:33.709314 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-16 00:59:33.709318 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-03-16 00:59:33.709323 | orchestrator |
2026-03-16 00:59:33.709327 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-16 00:59:33.709331 | orchestrator | Monday 16 March 2026 00:57:41 +0000 (0:00:00.871) 0:00:24.154 **********
2026-03-16 00:59:33.709336 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-16 00:59:33.709340 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-16 00:59:33.709344 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-16 00:59:33.709349 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:59:33.709353 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-16 00:59:33.709358 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-16 00:59:33.709362 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-16 00:59:33.709366 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:59:33.709370 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-16 00:59:33.709380 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-16 00:59:33.709384 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-16 00:59:33.709391 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:59:33.709395 | orchestrator |
2026-03-16 00:59:33.709398 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-16 00:59:33.709402 | orchestrator | Monday 16 March 2026 00:57:42 +0000 (0:00:00.360) 0:00:24.514 **********
2026-03-16 00:59:33.709407 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-16 00:59:33.709411 | orchestrator |
2026-03-16 00:59:33.709415 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-16 00:59:33.709420 | orchestrator | Monday 16 March 2026 00:57:43 +0000 (0:00:00.732) 0:00:25.247 **********
2026-03-16 00:59:33.709427 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:59:33.709431 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:59:33.709435 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:59:33.709439 | orchestrator |
2026-03-16 00:59:33.709443 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-16 00:59:33.709446 | orchestrator | Monday 16 March 2026 00:57:43 +0000 (0:00:00.334) 0:00:25.581 **********
2026-03-16 00:59:33.709450 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:59:33.709454 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:59:33.709458 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:59:33.709462 | orchestrator |
2026-03-16 00:59:33.709465 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-16 00:59:33.709469 | orchestrator | Monday 16 March 2026 00:57:43 +0000 (0:00:00.337) 0:00:25.908 **********
2026-03-16 00:59:33.709473 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:59:33.709477 | orchestrator | skipping: [testbed-node-4]
2026-03-16 00:59:33.709481 | orchestrator | skipping: [testbed-node-5]
2026-03-16 00:59:33.709485 | orchestrator |
2026-03-16 00:59:33.709488 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-16 00:59:33.709492 | orchestrator | Monday 16 March 2026 00:57:44 +0000 (0:00:00.655) 0:00:26.246 **********
2026-03-16 00:59:33.709496 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:59:33.709500 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:59:33.709504 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:59:33.709507 | orchestrator |
2026-03-16 00:59:33.709511 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-16 00:59:33.709515 | orchestrator | Monday 16 March 2026 00:57:44 +0000 (0:00:00.397) 0:00:26.902 **********
2026-03-16 00:59:33.709519 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-16 00:59:33.709523 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-16 00:59:33.709526 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-16 00:59:33.709530 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:59:33.709534 | orchestrator |
2026-03-16 00:59:33.709538 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-16 00:59:33.709541 | orchestrator | Monday 16 March 2026 00:57:45 +0000 (0:00:00.397) 0:00:27.299 **********
2026-03-16 00:59:33.709545 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-16 00:59:33.709549 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-16 00:59:33.709553 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-16 00:59:33.709557 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:59:33.709560 | orchestrator |
2026-03-16 00:59:33.709564 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-16 00:59:33.709568 | orchestrator | Monday 16 March 2026 00:57:45 +0000 (0:00:00.374) 0:00:27.674 **********
2026-03-16 00:59:33.709572 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-16 00:59:33.709576 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-16 00:59:33.709583 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-16 00:59:33.709587 | orchestrator | skipping: [testbed-node-3]
2026-03-16 00:59:33.709591 | orchestrator |
2026-03-16 00:59:33.709594 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-16 00:59:33.709598 | orchestrator | Monday 16 March 2026 00:57:45 +0000 (0:00:00.359) 0:00:28.033 **********
2026-03-16 00:59:33.709602 | orchestrator | ok: [testbed-node-3]
2026-03-16 00:59:33.709606 | orchestrator | ok: [testbed-node-4]
2026-03-16 00:59:33.709609 | orchestrator | ok: [testbed-node-5]
2026-03-16 00:59:33.709613 | orchestrator |
2026-03-16 00:59:33.709617 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-16 00:59:33.709621 | orchestrator | Monday 16 March 2026 00:57:46 +0000 (0:00:00.465) 0:00:28.318 **********
2026-03-16 00:59:33.709624 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-16 00:59:33.709628 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-16 00:59:33.709632 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-16 00:59:33.709636 | orchestrator |
2026-03-16 00:59:33.709639 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-16 00:59:33.709643 | orchestrator | Monday 16 March 2026 00:57:46 +0000 (0:00:00.465) 0:00:28.783 **********
2026-03-16 00:59:33.709647 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-16 00:59:33.709651 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-16 00:59:33.709655 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-16 00:59:33.709659 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-16 00:59:33.709663 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-16 00:59:33.709666 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-16 00:59:33.709670 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-16 00:59:33.709674 | orchestrator |
2026-03-16 00:59:33.709678 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-16 00:59:33.709684 | orchestrator | Monday 16 March 2026 00:57:47 +0000 (0:00:00.864) 0:00:29.648 **********
2026-03-16 00:59:33.709688 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-16 00:59:33.709692 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-16 00:59:33.709696 | orchestrator | ok: [testbed-node-3 ->
testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-16 00:59:33.709700 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-16 00:59:33.709703 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-16 00:59:33.709707 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-16 00:59:33.709713 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-16 00:59:33.709717 | orchestrator | 2026-03-16 00:59:33.709721 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-03-16 00:59:33.709725 | orchestrator | Monday 16 March 2026 00:57:49 +0000 (0:00:01.724) 0:00:31.372 ********** 2026-03-16 00:59:33.709729 | orchestrator | skipping: [testbed-node-3] 2026-03-16 00:59:33.709732 | orchestrator | skipping: [testbed-node-4] 2026-03-16 00:59:33.709736 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-03-16 00:59:33.709740 | orchestrator | 2026-03-16 00:59:33.709744 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-03-16 00:59:33.709748 | orchestrator | Monday 16 March 2026 00:57:49 +0000 (0:00:00.333) 0:00:31.705 ********** 2026-03-16 00:59:33.709753 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-16 00:59:33.709760 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 
1}) 2026-03-16 00:59:33.709765 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-16 00:59:33.709768 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-16 00:59:33.709772 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-16 00:59:33.709776 | orchestrator | 2026-03-16 00:59:33.709780 | orchestrator | TASK [generate keys] *********************************************************** 2026-03-16 00:59:33.709784 | orchestrator | Monday 16 March 2026 00:58:34 +0000 (0:00:44.879) 0:01:16.584 ********** 2026-03-16 00:59:33.709788 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-16 00:59:33.709791 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-16 00:59:33.709795 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-16 00:59:33.709799 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-16 00:59:33.709803 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-16 00:59:33.709806 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-16 
00:59:33.709810 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-03-16 00:59:33.709814 | orchestrator | 2026-03-16 00:59:33.709818 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-03-16 00:59:33.709821 | orchestrator | Monday 16 March 2026 00:58:59 +0000 (0:00:24.908) 0:01:41.493 ********** 2026-03-16 00:59:33.709825 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-16 00:59:33.709829 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-16 00:59:33.709833 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-16 00:59:33.709836 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-16 00:59:33.709840 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-16 00:59:33.709844 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-16 00:59:33.709848 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-16 00:59:33.709852 | orchestrator | 2026-03-16 00:59:33.709859 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-03-16 00:59:33.709863 | orchestrator | Monday 16 March 2026 00:59:12 +0000 (0:00:12.848) 0:01:54.342 ********** 2026-03-16 00:59:33.709866 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-16 00:59:33.709870 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-16 00:59:33.709874 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-16 00:59:33.709882 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-16 00:59:33.709886 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-1(192.168.16.11)] => (item=None) 2026-03-16 00:59:33.709893 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-16 00:59:33.709897 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-16 00:59:33.709900 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-16 00:59:33.709904 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-16 00:59:33.709908 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-16 00:59:33.709912 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-16 00:59:33.709916 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-16 00:59:33.709919 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-16 00:59:33.709923 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-16 00:59:33.709927 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-16 00:59:33.709931 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-16 00:59:33.709934 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-16 00:59:33.709938 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-16 00:59:33.709942 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-03-16 00:59:33.709946 | orchestrator | 2026-03-16 00:59:33.709950 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-16 00:59:33.709953 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-16 00:59:33.709958 | 
orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-16 00:59:33.709962 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-16 00:59:33.709966 | orchestrator | 2026-03-16 00:59:33.709970 | orchestrator | 2026-03-16 00:59:33.709974 | orchestrator | 2026-03-16 00:59:33.709978 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-16 00:59:33.709984 | orchestrator | Monday 16 March 2026 00:59:30 +0000 (0:00:18.005) 0:02:12.347 ********** 2026-03-16 00:59:33.709990 | orchestrator | =============================================================================== 2026-03-16 00:59:33.709996 | orchestrator | create openstack pool(s) ----------------------------------------------- 44.88s 2026-03-16 00:59:33.710002 | orchestrator | generate keys ---------------------------------------------------------- 24.91s 2026-03-16 00:59:33.710008 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.01s 2026-03-16 00:59:33.710058 | orchestrator | get keys from monitors ------------------------------------------------- 12.85s 2026-03-16 00:59:33.710063 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.23s 2026-03-16 00:59:33.710068 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 2.47s 2026-03-16 00:59:33.710072 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.72s 2026-03-16 00:59:33.710076 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 1.54s 2026-03-16 00:59:33.710079 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.94s 2026-03-16 00:59:33.710083 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.87s 2026-03-16 
00:59:33.710087 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.86s 2026-03-16 00:59:33.710095 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.80s 2026-03-16 00:59:33.710099 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.73s 2026-03-16 00:59:33.710103 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.68s 2026-03-16 00:59:33.710107 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.67s 2026-03-16 00:59:33.710111 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.66s 2026-03-16 00:59:33.710114 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.65s 2026-03-16 00:59:33.710118 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.65s 2026-03-16 00:59:33.710122 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.64s 2026-03-16 00:59:33.710125 | orchestrator | ceph-facts : Set_fact build devices from resolved symlinks -------------- 0.55s 2026-03-16 00:59:33.710402 | orchestrator | 2026-03-16 00:59:33 | INFO  | Task 563ab3b3-a0b9-4e1b-bf47-1fceb456dd39 is in state STARTED 2026-03-16 00:59:33.712291 | orchestrator | 2026-03-16 00:59:33 | INFO  | Task 21ed3bae-82d1-4ea6-a57f-fb00577de569 is in state STARTED 2026-03-16 00:59:33.712328 | orchestrator | 2026-03-16 00:59:33 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:59:36.771835 | orchestrator | 2026-03-16 00:59:36 | INFO  | Task 670212d8-2270-47e3-8265-b7ebb97477d3 is in state STARTED 2026-03-16 00:59:36.773802 | orchestrator | 2026-03-16 00:59:36 | INFO  | Task 563ab3b3-a0b9-4e1b-bf47-1fceb456dd39 is in state STARTED 2026-03-16 00:59:36.777745 | orchestrator | 2026-03-16 00:59:36 | INFO  | Task 
21ed3bae-82d1-4ea6-a57f-fb00577de569 is in state STARTED 2026-03-16 00:59:36.777834 | orchestrator | 2026-03-16 00:59:36 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:59:39.827580 | orchestrator | 2026-03-16 00:59:39 | INFO  | Task 670212d8-2270-47e3-8265-b7ebb97477d3 is in state STARTED 2026-03-16 00:59:39.828255 | orchestrator | 2026-03-16 00:59:39 | INFO  | Task 563ab3b3-a0b9-4e1b-bf47-1fceb456dd39 is in state STARTED 2026-03-16 00:59:39.830228 | orchestrator | 2026-03-16 00:59:39 | INFO  | Task 21ed3bae-82d1-4ea6-a57f-fb00577de569 is in state STARTED 2026-03-16 00:59:39.830278 | orchestrator | 2026-03-16 00:59:39 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:59:42.875855 | orchestrator | 2026-03-16 00:59:42 | INFO  | Task 670212d8-2270-47e3-8265-b7ebb97477d3 is in state STARTED 2026-03-16 00:59:42.878962 | orchestrator | 2026-03-16 00:59:42 | INFO  | Task 563ab3b3-a0b9-4e1b-bf47-1fceb456dd39 is in state STARTED 2026-03-16 00:59:42.881686 | orchestrator | 2026-03-16 00:59:42 | INFO  | Task 21ed3bae-82d1-4ea6-a57f-fb00577de569 is in state STARTED 2026-03-16 00:59:42.881729 | orchestrator | 2026-03-16 00:59:42 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:59:45.926147 | orchestrator | 2026-03-16 00:59:45 | INFO  | Task 670212d8-2270-47e3-8265-b7ebb97477d3 is in state STARTED 2026-03-16 00:59:45.927640 | orchestrator | 2026-03-16 00:59:45 | INFO  | Task 563ab3b3-a0b9-4e1b-bf47-1fceb456dd39 is in state STARTED 2026-03-16 00:59:45.929347 | orchestrator | 2026-03-16 00:59:45 | INFO  | Task 21ed3bae-82d1-4ea6-a57f-fb00577de569 is in state STARTED 2026-03-16 00:59:45.929699 | orchestrator | 2026-03-16 00:59:45 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:59:48.983598 | orchestrator | 2026-03-16 00:59:48 | INFO  | Task 670212d8-2270-47e3-8265-b7ebb97477d3 is in state STARTED 2026-03-16 00:59:48.985686 | orchestrator | 2026-03-16 00:59:48 | INFO  | Task 563ab3b3-a0b9-4e1b-bf47-1fceb456dd39 is in state 
SUCCESS 2026-03-16 00:59:48.989025 | orchestrator | 2026-03-16 00:59:48.989103 | orchestrator | 2026-03-16 00:59:48.989179 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-16 00:59:48.989236 | orchestrator | 2026-03-16 00:59:48.989249 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-16 00:59:48.989259 | orchestrator | Monday 16 March 2026 00:58:08 +0000 (0:00:00.257) 0:00:00.257 ********** 2026-03-16 00:59:48.989266 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:59:48.989274 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:59:48.989281 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:59:48.989322 | orchestrator | 2026-03-16 00:59:48.989330 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-16 00:59:48.989337 | orchestrator | Monday 16 March 2026 00:58:08 +0000 (0:00:00.302) 0:00:00.560 ********** 2026-03-16 00:59:48.989407 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-03-16 00:59:48.989413 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-03-16 00:59:48.989418 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-03-16 00:59:48.989422 | orchestrator | 2026-03-16 00:59:48.989427 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-03-16 00:59:48.989432 | orchestrator | 2026-03-16 00:59:48.989436 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-16 00:59:48.989441 | orchestrator | Monday 16 March 2026 00:58:09 +0000 (0:00:00.442) 0:00:01.003 ********** 2026-03-16 00:59:48.989446 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:59:48.989452 | orchestrator | 2026-03-16 00:59:48.989457 | orchestrator | TASK [horizon : Ensuring config 
directories exist] ***************************** 2026-03-16 00:59:48.989461 | orchestrator | Monday 16 March 2026 00:58:09 +0000 (0:00:00.522) 0:00:01.525 ********** 2026-03-16 00:59:48.989485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-16 00:59:48.989509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-16 00:59:48.989527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': 
'80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-16 00:59:48.989544 | orchestrator | 2026-03-16 00:59:48.989553 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-03-16 00:59:48.989561 | orchestrator | Monday 16 March 2026 00:58:10 +0000 (0:00:01.187) 0:00:02.712 ********** 2026-03-16 00:59:48.989568 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:59:48.989574 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:59:48.989581 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:59:48.989588 | orchestrator | 2026-03-16 00:59:48.989595 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-16 00:59:48.989602 | orchestrator | Monday 16 March 2026 00:58:11 +0000 (0:00:00.479) 0:00:03.192 ********** 2026-03-16 00:59:48.989616 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-16 00:59:48.989623 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-16 00:59:48.989655 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-03-16 00:59:48.989663 | orchestrator | skipping: [testbed-node-0] => 
(item={'name': 'masakari', 'enabled': False})  2026-03-16 00:59:48.989671 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-03-16 00:59:48.989679 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-03-16 00:59:48.989687 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-03-16 00:59:48.989695 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-03-16 00:59:48.989703 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-16 00:59:48.989712 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-16 00:59:48.989716 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-03-16 00:59:48.989721 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-03-16 00:59:48.989726 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-03-16 00:59:48.989730 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-03-16 00:59:48.989735 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-03-16 00:59:48.989739 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-03-16 00:59:48.989744 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-16 00:59:48.989749 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-16 00:59:48.989754 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-03-16 00:59:48.989758 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-03-16 00:59:48.989767 | 
orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-03-16 00:59:48.989772 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-03-16 00:59:48.989776 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-03-16 00:59:48.989781 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-03-16 00:59:48.989786 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-03-16 00:59:48.989792 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-03-16 00:59:48.989818 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-03-16 00:59:48.989824 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-03-16 00:59:48.989828 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-03-16 00:59:48.989833 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-03-16 00:59:48.989837 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-03-16 00:59:48.989842 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 
'neutron', 'enabled': True}) 2026-03-16 00:59:48.989846 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-03-16 00:59:48.989852 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-03-16 00:59:48.989857 | orchestrator | 2026-03-16 00:59:48.989861 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-16 00:59:48.989866 | orchestrator | Monday 16 March 2026 00:58:12 +0000 (0:00:00.698) 0:00:03.891 ********** 2026-03-16 00:59:48.989870 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:59:48.989875 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:59:48.989880 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:59:48.989884 | orchestrator | 2026-03-16 00:59:48.989889 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-16 00:59:48.989893 | orchestrator | Monday 16 March 2026 00:58:12 +0000 (0:00:00.314) 0:00:04.205 ********** 2026-03-16 00:59:48.989903 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:59:48.989908 | orchestrator | 2026-03-16 00:59:48.989913 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-16 00:59:48.989917 | orchestrator | Monday 16 March 2026 00:58:12 +0000 (0:00:00.131) 0:00:04.336 ********** 2026-03-16 00:59:48.989922 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:59:48.989926 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:59:48.989931 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:59:48.989935 | orchestrator | 2026-03-16 00:59:48.989940 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-16 00:59:48.989944 | orchestrator | Monday 16 March 2026 00:58:12 
+0000 (0:00:00.485) 0:00:04.822 ********** 2026-03-16 00:59:48.989949 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:59:48.989954 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:59:48.989958 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:59:48.989963 | orchestrator | 2026-03-16 00:59:48.989967 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-16 00:59:48.989972 | orchestrator | Monday 16 March 2026 00:58:13 +0000 (0:00:00.307) 0:00:05.129 ********** 2026-03-16 00:59:48.989976 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:59:48.989981 | orchestrator | 2026-03-16 00:59:48.989985 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-16 00:59:48.989990 | orchestrator | Monday 16 March 2026 00:58:13 +0000 (0:00:00.119) 0:00:05.249 ********** 2026-03-16 00:59:48.989994 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:59:48.989999 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:59:48.990003 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:59:48.990008 | orchestrator | 2026-03-16 00:59:48.990044 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-16 00:59:48.990055 | orchestrator | Monday 16 March 2026 00:58:13 +0000 (0:00:00.325) 0:00:05.574 ********** 2026-03-16 00:59:48.990060 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:59:48.990065 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:59:48.990069 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:59:48.990074 | orchestrator | 2026-03-16 00:59:48.990079 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-16 00:59:48.990083 | orchestrator | Monday 16 March 2026 00:58:14 +0000 (0:00:00.312) 0:00:05.887 ********** 2026-03-16 00:59:48.990088 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:59:48.990092 | orchestrator | 2026-03-16 
00:59:48.990097 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-16 00:59:48.990101 | orchestrator | Monday 16 March 2026 00:58:14 +0000 (0:00:00.310) 0:00:06.197 ********** 2026-03-16 00:59:48.990106 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:59:48.990156 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:59:48.990163 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:59:48.990168 | orchestrator | 2026-03-16 00:59:48.990172 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-16 00:59:48.990177 | orchestrator | Monday 16 March 2026 00:58:14 +0000 (0:00:00.293) 0:00:06.491 ********** 2026-03-16 00:59:48.990181 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:59:48.990186 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:59:48.990190 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:59:48.990195 | orchestrator | 2026-03-16 00:59:48.990199 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-16 00:59:48.990204 | orchestrator | Monday 16 March 2026 00:58:14 +0000 (0:00:00.301) 0:00:06.793 ********** 2026-03-16 00:59:48.990208 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:59:48.990213 | orchestrator | 2026-03-16 00:59:48.990218 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-16 00:59:48.990222 | orchestrator | Monday 16 March 2026 00:58:15 +0000 (0:00:00.135) 0:00:06.929 ********** 2026-03-16 00:59:48.990227 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:59:48.990231 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:59:48.990236 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:59:48.990240 | orchestrator | 2026-03-16 00:59:48.990245 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-16 00:59:48.990249 | orchestrator | 
Monday 16 March 2026 00:58:15 +0000 (0:00:00.339) 0:00:07.268 ********** 2026-03-16 00:59:48.990254 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:59:48.990259 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:59:48.990263 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:59:48.990268 | orchestrator | 2026-03-16 00:59:48.990272 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-16 00:59:48.990277 | orchestrator | Monday 16 March 2026 00:58:15 +0000 (0:00:00.488) 0:00:07.756 ********** 2026-03-16 00:59:48.990281 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:59:48.990286 | orchestrator | 2026-03-16 00:59:48.990290 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-16 00:59:48.990295 | orchestrator | Monday 16 March 2026 00:58:16 +0000 (0:00:00.140) 0:00:07.896 ********** 2026-03-16 00:59:48.990300 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:59:48.990304 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:59:48.990309 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:59:48.990313 | orchestrator | 2026-03-16 00:59:48.990318 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-16 00:59:48.990322 | orchestrator | Monday 16 March 2026 00:58:16 +0000 (0:00:00.316) 0:00:08.213 ********** 2026-03-16 00:59:48.990327 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:59:48.990331 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:59:48.990336 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:59:48.990340 | orchestrator | 2026-03-16 00:59:48.990345 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-16 00:59:48.990349 | orchestrator | Monday 16 March 2026 00:58:16 +0000 (0:00:00.300) 0:00:08.513 ********** 2026-03-16 00:59:48.990358 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:59:48.990362 | 
orchestrator | 2026-03-16 00:59:48.990367 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-16 00:59:48.990371 | orchestrator | Monday 16 March 2026 00:58:16 +0000 (0:00:00.144) 0:00:08.658 ********** 2026-03-16 00:59:48.990376 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:59:48.990380 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:59:48.990385 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:59:48.990389 | orchestrator | 2026-03-16 00:59:48.990394 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-16 00:59:48.990414 | orchestrator | Monday 16 March 2026 00:58:17 +0000 (0:00:00.280) 0:00:08.939 ********** 2026-03-16 00:59:48.990419 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:59:48.990424 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:59:48.990428 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:59:48.990433 | orchestrator | 2026-03-16 00:59:48.990437 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-16 00:59:48.990442 | orchestrator | Monday 16 March 2026 00:58:17 +0000 (0:00:00.505) 0:00:09.444 ********** 2026-03-16 00:59:48.990446 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:59:48.990451 | orchestrator | 2026-03-16 00:59:48.990455 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-16 00:59:48.990460 | orchestrator | Monday 16 March 2026 00:58:17 +0000 (0:00:00.122) 0:00:09.567 ********** 2026-03-16 00:59:48.990464 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:59:48.990469 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:59:48.990473 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:59:48.990478 | orchestrator | 2026-03-16 00:59:48.990483 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-16 
00:59:48.990487 | orchestrator | Monday 16 March 2026 00:58:17 +0000 (0:00:00.297) 0:00:09.864 ********** 2026-03-16 00:59:48.990492 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:59:48.990496 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:59:48.990501 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:59:48.990505 | orchestrator | 2026-03-16 00:59:48.990510 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-16 00:59:48.990514 | orchestrator | Monday 16 March 2026 00:58:18 +0000 (0:00:00.325) 0:00:10.189 ********** 2026-03-16 00:59:48.990519 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:59:48.990523 | orchestrator | 2026-03-16 00:59:48.990528 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-16 00:59:48.990532 | orchestrator | Monday 16 March 2026 00:58:18 +0000 (0:00:00.141) 0:00:10.331 ********** 2026-03-16 00:59:48.990537 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:59:48.990542 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:59:48.990546 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:59:48.990551 | orchestrator | 2026-03-16 00:59:48.990555 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-16 00:59:48.990560 | orchestrator | Monday 16 March 2026 00:58:18 +0000 (0:00:00.489) 0:00:10.820 ********** 2026-03-16 00:59:48.990564 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:59:48.990569 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:59:48.990573 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:59:48.990578 | orchestrator | 2026-03-16 00:59:48.990586 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-16 00:59:48.990590 | orchestrator | Monday 16 March 2026 00:58:19 +0000 (0:00:00.317) 0:00:11.137 ********** 2026-03-16 00:59:48.990595 | orchestrator | skipping: [testbed-node-0] 
2026-03-16 00:59:48.990599 | orchestrator | 2026-03-16 00:59:48.990604 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-16 00:59:48.990608 | orchestrator | Monday 16 March 2026 00:58:19 +0000 (0:00:00.119) 0:00:11.256 ********** 2026-03-16 00:59:48.990613 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:59:48.990617 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:59:48.990626 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:59:48.990630 | orchestrator | 2026-03-16 00:59:48.990635 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-16 00:59:48.990639 | orchestrator | Monday 16 March 2026 00:58:19 +0000 (0:00:00.320) 0:00:11.577 ********** 2026-03-16 00:59:48.990644 | orchestrator | ok: [testbed-node-0] 2026-03-16 00:59:48.990648 | orchestrator | ok: [testbed-node-1] 2026-03-16 00:59:48.990653 | orchestrator | ok: [testbed-node-2] 2026-03-16 00:59:48.990657 | orchestrator | 2026-03-16 00:59:48.990662 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-16 00:59:48.990666 | orchestrator | Monday 16 March 2026 00:58:20 +0000 (0:00:00.308) 0:00:11.886 ********** 2026-03-16 00:59:48.990671 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:59:48.990675 | orchestrator | 2026-03-16 00:59:48.990680 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-16 00:59:48.990685 | orchestrator | Monday 16 March 2026 00:58:20 +0000 (0:00:00.124) 0:00:12.011 ********** 2026-03-16 00:59:48.990689 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:59:48.990694 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:59:48.990698 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:59:48.990703 | orchestrator | 2026-03-16 00:59:48.990707 | orchestrator | TASK [horizon : Copying over config.json files for services] 
******************* 2026-03-16 00:59:48.990712 | orchestrator | Monday 16 March 2026 00:58:20 +0000 (0:00:00.558) 0:00:12.570 ********** 2026-03-16 00:59:48.990717 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:59:48.990721 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:59:48.990725 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:59:48.990730 | orchestrator | 2026-03-16 00:59:48.990734 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-03-16 00:59:48.990739 | orchestrator | Monday 16 March 2026 00:58:22 +0000 (0:00:01.667) 0:00:14.238 ********** 2026-03-16 00:59:48.990744 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-16 00:59:48.990748 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-16 00:59:48.990753 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-16 00:59:48.990757 | orchestrator | 2026-03-16 00:59:48.990762 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-03-16 00:59:48.990766 | orchestrator | Monday 16 March 2026 00:58:24 +0000 (0:00:01.930) 0:00:16.168 ********** 2026-03-16 00:59:48.990771 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-16 00:59:48.990776 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-16 00:59:48.990781 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-16 00:59:48.990786 | orchestrator | 2026-03-16 00:59:48.990794 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-03-16 00:59:48.990798 | orchestrator | Monday 16 March 2026 00:58:26 +0000 (0:00:02.717) 
0:00:18.885 ********** 2026-03-16 00:59:48.990803 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-16 00:59:48.990808 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-16 00:59:48.990812 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-16 00:59:48.990817 | orchestrator | 2026-03-16 00:59:48.990821 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-03-16 00:59:48.990826 | orchestrator | Monday 16 March 2026 00:58:29 +0000 (0:00:02.159) 0:00:21.044 ********** 2026-03-16 00:59:48.990830 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:59:48.990835 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:59:48.990839 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:59:48.990847 | orchestrator | 2026-03-16 00:59:48.990852 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-03-16 00:59:48.990857 | orchestrator | Monday 16 March 2026 00:58:29 +0000 (0:00:00.329) 0:00:21.374 ********** 2026-03-16 00:59:48.990861 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:59:48.990866 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:59:48.990870 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:59:48.990875 | orchestrator | 2026-03-16 00:59:48.990879 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-16 00:59:48.990884 | orchestrator | Monday 16 March 2026 00:58:29 +0000 (0:00:00.283) 0:00:21.657 ********** 2026-03-16 00:59:48.990889 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:59:48.990893 | orchestrator | 2026-03-16 00:59:48.990898 | orchestrator | TASK [service-cert-copy : 
horizon | Copying over extra CA certificates] ******** 2026-03-16 00:59:48.990903 | orchestrator | Monday 16 March 2026 00:58:30 +0000 (0:00:00.802) 0:00:22.460 ********** 2026-03-16 00:59:48.990917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-16 00:59:48.990933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-16 00:59:48.990956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': 
'80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-16 00:59:48.990965 | orchestrator | 2026-03-16 00:59:48.990972 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-03-16 00:59:48.990979 | orchestrator | Monday 16 March 2026 00:58:32 +0000 (0:00:01.798) 0:00:24.258 ********** 2026-03-16 00:59:48.990996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-16 00:59:48.991010 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:59:48.991022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 
'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-16 00:59:48.991034 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:59:48.991045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 
'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-16 00:59:48.991055 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:59:48.991060 | orchestrator | 2026-03-16 
00:59:48.991064 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-03-16 00:59:48.991069 | orchestrator | Monday 16 March 2026 00:58:33 +0000 (0:00:00.686) 0:00:24.945 ********** 2026-03-16 00:59:48.991078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-16 00:59:48.991086 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:59:48.991094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-16 00:59:48.991099 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:59:48.991108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-16 00:59:48.991134 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:59:48.991139 | orchestrator | 2026-03-16 00:59:48.991144 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-03-16 00:59:48.991152 | orchestrator | Monday 16 March 2026 00:58:33 +0000 (0:00:00.885) 0:00:25.831 ********** 2026-03-16 00:59:48.991164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 
'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-16 00:59:48.991177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 
'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-16 00:59:48.991196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': 
False, 'custom_member_list': []}}}}) 2026-03-16 00:59:48.991210 | orchestrator | 2026-03-16 00:59:48.991218 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-16 00:59:48.991225 | orchestrator | Monday 16 March 2026 00:58:35 +0000 (0:00:01.849) 0:00:27.680 ********** 2026-03-16 00:59:48.991232 | orchestrator | skipping: [testbed-node-0] 2026-03-16 00:59:48.991239 | orchestrator | skipping: [testbed-node-1] 2026-03-16 00:59:48.991246 | orchestrator | skipping: [testbed-node-2] 2026-03-16 00:59:48.991255 | orchestrator | 2026-03-16 00:59:48.991262 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-16 00:59:48.991274 | orchestrator | Monday 16 March 2026 00:58:36 +0000 (0:00:00.463) 0:00:28.143 ********** 2026-03-16 00:59:48.991282 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 00:59:48.991299 | orchestrator | 2026-03-16 00:59:48.991316 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-03-16 00:59:48.991323 | orchestrator | Monday 16 March 2026 00:58:36 +0000 (0:00:00.579) 0:00:28.723 ********** 2026-03-16 00:59:48.991331 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:59:48.991339 | orchestrator | 2026-03-16 00:59:48.991346 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-03-16 00:59:48.991351 | orchestrator | Monday 16 March 2026 00:58:39 +0000 (0:00:02.838) 0:00:31.561 ********** 2026-03-16 00:59:48.991355 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:59:48.991360 | orchestrator | 2026-03-16 00:59:48.991364 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-03-16 00:59:48.991369 | orchestrator | Monday 16 March 2026 00:58:42 +0000 (0:00:03.064) 0:00:34.626 ********** 2026-03-16 00:59:48.991373 | 
orchestrator | changed: [testbed-node-0] 2026-03-16 00:59:48.991379 | orchestrator | 2026-03-16 00:59:48.991386 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-16 00:59:48.991393 | orchestrator | Monday 16 March 2026 00:58:59 +0000 (0:00:17.073) 0:00:51.699 ********** 2026-03-16 00:59:48.991404 | orchestrator | 2026-03-16 00:59:48.991412 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-16 00:59:48.991419 | orchestrator | Monday 16 March 2026 00:58:59 +0000 (0:00:00.066) 0:00:51.766 ********** 2026-03-16 00:59:48.991426 | orchestrator | 2026-03-16 00:59:48.991432 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-16 00:59:48.991439 | orchestrator | Monday 16 March 2026 00:58:59 +0000 (0:00:00.066) 0:00:51.833 ********** 2026-03-16 00:59:48.991446 | orchestrator | 2026-03-16 00:59:48.991453 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-03-16 00:59:48.991460 | orchestrator | Monday 16 March 2026 00:59:00 +0000 (0:00:00.066) 0:00:51.900 ********** 2026-03-16 00:59:48.991466 | orchestrator | changed: [testbed-node-0] 2026-03-16 00:59:48.991474 | orchestrator | changed: [testbed-node-1] 2026-03-16 00:59:48.991481 | orchestrator | changed: [testbed-node-2] 2026-03-16 00:59:48.991488 | orchestrator | 2026-03-16 00:59:48.991496 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-16 00:59:48.991510 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-16 00:59:48.991519 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-16 00:59:48.991526 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-16 00:59:48.991535 | 
orchestrator | 2026-03-16 00:59:48.991540 | orchestrator | 2026-03-16 00:59:48.991544 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-16 00:59:48.991549 | orchestrator | Monday 16 March 2026 00:59:46 +0000 (0:00:45.987) 0:01:37.888 ********** 2026-03-16 00:59:48.991553 | orchestrator | =============================================================================== 2026-03-16 00:59:48.991566 | orchestrator | horizon : Restart horizon container ------------------------------------ 45.99s 2026-03-16 00:59:48.991571 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 17.07s 2026-03-16 00:59:48.991576 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 3.06s 2026-03-16 00:59:48.991580 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.84s 2026-03-16 00:59:48.991585 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.72s 2026-03-16 00:59:48.991590 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.16s 2026-03-16 00:59:48.991594 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.93s 2026-03-16 00:59:48.991599 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.85s 2026-03-16 00:59:48.991603 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.80s 2026-03-16 00:59:48.991608 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.67s 2026-03-16 00:59:48.991612 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.19s 2026-03-16 00:59:48.991617 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.89s 2026-03-16 00:59:48.991621 | orchestrator | horizon : include_tasks 
------------------------------------------------- 0.80s 2026-03-16 00:59:48.991626 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.70s 2026-03-16 00:59:48.991630 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.69s 2026-03-16 00:59:48.991635 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.58s 2026-03-16 00:59:48.991639 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.56s 2026-03-16 00:59:48.991643 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.52s 2026-03-16 00:59:48.991648 | orchestrator | horizon : Update policy file name --------------------------------------- 0.51s 2026-03-16 00:59:48.991653 | orchestrator | horizon : Update policy file name --------------------------------------- 0.49s 2026-03-16 00:59:48.991657 | orchestrator | 2026-03-16 00:59:48 | INFO  | Task 21ed3bae-82d1-4ea6-a57f-fb00577de569 is in state STARTED 2026-03-16 00:59:48.991666 | orchestrator | 2026-03-16 00:59:48 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:59:52.022801 | orchestrator | 2026-03-16 00:59:52 | INFO  | Task 670212d8-2270-47e3-8265-b7ebb97477d3 is in state STARTED 2026-03-16 00:59:52.023427 | orchestrator | 2026-03-16 00:59:52 | INFO  | Task 21ed3bae-82d1-4ea6-a57f-fb00577de569 is in state STARTED 2026-03-16 00:59:52.023481 | orchestrator | 2026-03-16 00:59:52 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:59:55.068272 | orchestrator | 2026-03-16 00:59:55 | INFO  | Task 670212d8-2270-47e3-8265-b7ebb97477d3 is in state STARTED 2026-03-16 00:59:55.070097 | orchestrator | 2026-03-16 00:59:55 | INFO  | Task 21ed3bae-82d1-4ea6-a57f-fb00577de569 is in state STARTED 2026-03-16 00:59:55.070222 | orchestrator | 2026-03-16 00:59:55 | INFO  | Wait 1 second(s) until the next check 2026-03-16 00:59:58.129758 | orchestrator | 
2026-03-16 00:59:58 | INFO  | Task 670212d8-2270-47e3-8265-b7ebb97477d3 is in state STARTED 2026-03-16 00:59:58.131292 | orchestrator | 2026-03-16 00:59:58 | INFO  | Task 21ed3bae-82d1-4ea6-a57f-fb00577de569 is in state STARTED 2026-03-16 00:59:58.131369 | orchestrator | 2026-03-16 00:59:58 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:00:01.166942 | orchestrator | 2026-03-16 01:00:01 | INFO  | Task 670212d8-2270-47e3-8265-b7ebb97477d3 is in state STARTED 2026-03-16 01:00:01.167966 | orchestrator | 2026-03-16 01:00:01 | INFO  | Task 21ed3bae-82d1-4ea6-a57f-fb00577de569 is in state STARTED 2026-03-16 01:00:01.169071 | orchestrator | 2026-03-16 01:00:01 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:00:04.216624 | orchestrator | 2026-03-16 01:00:04 | INFO  | Task 670212d8-2270-47e3-8265-b7ebb97477d3 is in state STARTED 2026-03-16 01:00:04.218321 | orchestrator | 2026-03-16 01:00:04 | INFO  | Task 21ed3bae-82d1-4ea6-a57f-fb00577de569 is in state STARTED 2026-03-16 01:00:04.218586 | orchestrator | 2026-03-16 01:00:04 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:00:07.265148 | orchestrator | 2026-03-16 01:00:07 | INFO  | Task 670212d8-2270-47e3-8265-b7ebb97477d3 is in state STARTED 2026-03-16 01:00:07.266319 | orchestrator | 2026-03-16 01:00:07 | INFO  | Task 21ed3bae-82d1-4ea6-a57f-fb00577de569 is in state STARTED 2026-03-16 01:00:07.266369 | orchestrator | 2026-03-16 01:00:07 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:00:10.316907 | orchestrator | 2026-03-16 01:00:10 | INFO  | Task 670212d8-2270-47e3-8265-b7ebb97477d3 is in state STARTED 2026-03-16 01:00:10.317438 | orchestrator | 2026-03-16 01:00:10 | INFO  | Task 21ed3bae-82d1-4ea6-a57f-fb00577de569 is in state SUCCESS 2026-03-16 01:00:10.317689 | orchestrator | 2026-03-16 01:00:10 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:00:13.370827 | orchestrator | 2026-03-16 01:00:13 | INFO  | Task 74c613bb-7e9a-4a45-8b54-6d28c9eb3239 is in 
state STARTED 2026-03-16 01:00:13.371761 | orchestrator | 2026-03-16 01:00:13 | INFO  | Task 670212d8-2270-47e3-8265-b7ebb97477d3 is in state STARTED 2026-03-16 01:00:13.371828 | orchestrator | 2026-03-16 01:00:13 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:00:43.816885 | orchestrator | 2026-03-16 01:00:43 | INFO  | Task 74c613bb-7e9a-4a45-8b54-6d28c9eb3239 is in state STARTED 2026-03-16 01:00:43.818238 | orchestrator | 2026-03-16 01:00:43 | INFO  | Task 670212d8-2270-47e3-8265-b7ebb97477d3 is in state STARTED 
2026-03-16 01:00:43.818304 | orchestrator | 2026-03-16 01:00:43 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:00:46.860917 | orchestrator | 2026-03-16 01:00:46 | INFO  | Task 74c613bb-7e9a-4a45-8b54-6d28c9eb3239 is in state STARTED 2026-03-16 01:00:46.862248 | orchestrator | 2026-03-16 01:00:46 | INFO  | Task 670212d8-2270-47e3-8265-b7ebb97477d3 is in state STARTED 2026-03-16 01:00:46.862297 | orchestrator | 2026-03-16 01:00:46 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:00:49.906729 | orchestrator | 2026-03-16 01:00:49 | INFO  | Task 74c613bb-7e9a-4a45-8b54-6d28c9eb3239 is in state STARTED 2026-03-16 01:00:49.908501 | orchestrator | 2026-03-16 01:00:49 | INFO  | Task 670212d8-2270-47e3-8265-b7ebb97477d3 is in state STARTED 2026-03-16 01:00:49.908566 | orchestrator | 2026-03-16 01:00:49 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:00:52.944919 | orchestrator | 2026-03-16 01:00:52 | INFO  | Task 74c613bb-7e9a-4a45-8b54-6d28c9eb3239 is in state STARTED 2026-03-16 01:00:52.946473 | orchestrator | 2026-03-16 01:00:52 | INFO  | Task 670212d8-2270-47e3-8265-b7ebb97477d3 is in state SUCCESS 2026-03-16 01:00:52.947909 | orchestrator | 2026-03-16 01:00:52.947947 | orchestrator | 2026-03-16 01:00:52.947955 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-03-16 01:00:52.947963 | orchestrator | 2026-03-16 01:00:52.947969 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-03-16 01:00:52.947998 | orchestrator | Monday 16 March 2026 00:59:35 +0000 (0:00:00.163) 0:00:00.163 ********** 2026-03-16 01:00:52.948008 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-16 01:00:52.948022 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-16 01:00:52.948029 | orchestrator | ok: [testbed-manager -> 
testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-16 01:00:52.948035 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-16 01:00:52.948042 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-16 01:00:52.948069 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-16 01:00:52.948076 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-16 01:00:52.948082 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-16 01:00:52.948088 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-16 01:00:52.948095 | orchestrator | 2026-03-16 01:00:52.948101 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-03-16 01:00:52.948108 | orchestrator | Monday 16 March 2026 00:59:40 +0000 (0:00:05.010) 0:00:05.173 ********** 2026-03-16 01:00:52.948114 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-16 01:00:52.948120 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-16 01:00:52.948126 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-16 01:00:52.948133 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-16 01:00:52.948143 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-16 01:00:52.948153 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 
2026-03-16 01:00:52.948184 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-16 01:00:52.948194 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-16 01:00:52.948204 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-16 01:00:52.948215 | orchestrator | 2026-03-16 01:00:52.948225 | orchestrator | TASK [Create share directory] ************************************************** 2026-03-16 01:00:52.948235 | orchestrator | Monday 16 March 2026 00:59:44 +0000 (0:00:04.360) 0:00:09.534 ********** 2026-03-16 01:00:52.948247 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-16 01:00:52.948254 | orchestrator | 2026-03-16 01:00:52.948260 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-03-16 01:00:52.948266 | orchestrator | Monday 16 March 2026 00:59:45 +0000 (0:00:01.108) 0:00:10.642 ********** 2026-03-16 01:00:52.948273 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-03-16 01:00:52.948279 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-16 01:00:52.948286 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-16 01:00:52.948391 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-03-16 01:00:52.948399 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-16 01:00:52.948405 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-03-16 01:00:52.948411 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-03-16 01:00:52.948418 | orchestrator | changed: [testbed-manager -> localhost] => 
(item=ceph.client.gnocchi.keyring) 2026-03-16 01:00:52.948424 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-03-16 01:00:52.948431 | orchestrator | 2026-03-16 01:00:52.948437 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-03-16 01:00:52.948443 | orchestrator | Monday 16 March 2026 00:59:59 +0000 (0:00:13.772) 0:00:24.415 ********** 2026-03-16 01:00:52.948449 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-03-16 01:00:52.948455 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-03-16 01:00:52.948471 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-16 01:00:52.948477 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-16 01:00:52.948495 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-16 01:00:52.948501 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-16 01:00:52.948508 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-03-16 01:00:52.948515 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-03-16 01:00:52.948523 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-03-16 01:00:52.948530 | orchestrator | 2026-03-16 01:00:52.948537 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-03-16 01:00:52.948544 | orchestrator | Monday 16 March 2026 01:00:02 +0000 (0:00:03.152) 0:00:27.568 ********** 2026-03-16 
01:00:52.948552 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-03-16 01:00:52.948560 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-16 01:00:52.948567 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-16 01:00:52.948574 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-03-16 01:00:52.948581 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-16 01:00:52.948589 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-03-16 01:00:52.948595 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-03-16 01:00:52.948603 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-03-16 01:00:52.948610 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-03-16 01:00:52.948617 | orchestrator | 2026-03-16 01:00:52.948625 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-16 01:00:52.948632 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-16 01:00:52.948641 | orchestrator | 2026-03-16 01:00:52.948648 | orchestrator | 2026-03-16 01:00:52.948655 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-16 01:00:52.948662 | orchestrator | Monday 16 March 2026 01:00:09 +0000 (0:00:06.618) 0:00:34.186 ********** 2026-03-16 01:00:52.948670 | orchestrator | =============================================================================== 2026-03-16 01:00:52.948677 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.77s 2026-03-16 01:00:52.948685 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.62s 2026-03-16 01:00:52.948692 | 
orchestrator | Check if ceph keys exist ------------------------------------------------ 5.01s 2026-03-16 01:00:52.948699 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.36s 2026-03-16 01:00:52.948706 | orchestrator | Check if target directories exist --------------------------------------- 3.15s 2026-03-16 01:00:52.948714 | orchestrator | Create share directory -------------------------------------------------- 1.11s 2026-03-16 01:00:52.948721 | orchestrator | 2026-03-16 01:00:52.948728 | orchestrator | 2026-03-16 01:00:52.948735 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-16 01:00:52.948742 | orchestrator | 2026-03-16 01:00:52.948750 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-16 01:00:52.948757 | orchestrator | Monday 16 March 2026 00:58:08 +0000 (0:00:00.259) 0:00:00.259 ********** 2026-03-16 01:00:52.948765 | orchestrator | ok: [testbed-node-0] 2026-03-16 01:00:52.948772 | orchestrator | ok: [testbed-node-1] 2026-03-16 01:00:52.948784 | orchestrator | ok: [testbed-node-2] 2026-03-16 01:00:52.948792 | orchestrator | 2026-03-16 01:00:52.948799 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-16 01:00:52.948806 | orchestrator | Monday 16 March 2026 00:58:08 +0000 (0:00:00.331) 0:00:00.590 ********** 2026-03-16 01:00:52.948815 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-16 01:00:52.948822 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-16 01:00:52.948833 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-16 01:00:52.948841 | orchestrator | 2026-03-16 01:00:52.948848 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-03-16 01:00:52.948855 | orchestrator | 2026-03-16 01:00:52.948862 | orchestrator | TASK 
[keystone : include_tasks] ************************************************ 2026-03-16 01:00:52.948870 | orchestrator | Monday 16 March 2026 00:58:08 +0000 (0:00:00.425) 0:00:01.016 ********** 2026-03-16 01:00:52.948878 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 01:00:52.948886 | orchestrator | 2026-03-16 01:00:52.948893 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-03-16 01:00:52.948900 | orchestrator | Monday 16 March 2026 00:58:09 +0000 (0:00:00.547) 0:00:01.563 ********** 2026-03-16 01:00:52.948917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-16 01:00:52.948928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-16 01:00:52.948935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-16 01:00:52.948968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-16 01:00:52.949016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-16 01:00:52.949031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-16 01:00:52.949038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-16 01:00:52.949045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-16 01:00:52.949052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-16 01:00:52.949063 | orchestrator | 2026-03-16 01:00:52.949069 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 
2026-03-16 01:00:52.949076 | orchestrator | Monday 16 March 2026 00:58:11 +0000 (0:00:01.611) 0:00:03.175 ********** 2026-03-16 01:00:52.949082 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:00:52.949089 | orchestrator | 2026-03-16 01:00:52.949095 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-03-16 01:00:52.949101 | orchestrator | Monday 16 March 2026 00:58:11 +0000 (0:00:00.134) 0:00:03.309 ********** 2026-03-16 01:00:52.949107 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:00:52.949114 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:00:52.949120 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:00:52.949126 | orchestrator | 2026-03-16 01:00:52.949134 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-03-16 01:00:52.949145 | orchestrator | Monday 16 March 2026 00:58:11 +0000 (0:00:00.466) 0:00:03.776 ********** 2026-03-16 01:00:52.949157 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-16 01:00:52.949167 | orchestrator | 2026-03-16 01:00:52.949177 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-16 01:00:52.949192 | orchestrator | Monday 16 March 2026 00:58:12 +0000 (0:00:00.804) 0:00:04.580 ********** 2026-03-16 01:00:52.949204 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 01:00:52.949215 | orchestrator | 2026-03-16 01:00:52.949227 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-03-16 01:00:52.949238 | orchestrator | Monday 16 March 2026 00:58:13 +0000 (0:00:00.529) 0:00:05.109 ********** 2026-03-16 01:00:52.949256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-16 01:00:52.949265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-16 01:00:52.949279 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-16 01:00:52.949286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-16 01:00:52.949296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-16 01:00:52.949307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-16 01:00:52.949314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-16 01:00:52.949320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-16 01:00:52.949331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-16 01:00:52.949338 | orchestrator | 2026-03-16 01:00:52.949344 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-03-16 01:00:52.949351 | orchestrator | Monday 16 March 2026 00:58:16 +0000 (0:00:03.675) 0:00:08.785 ********** 2026-03-16 01:00:52.949361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-16 01:00:52.949369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-16 01:00:52.949380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-16 01:00:52.949387 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:00:52.949395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-16 01:00:52.949417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-16 01:00:52.949430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-16 01:00:52.949440 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:00:52.949451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-16 01:00:52.949556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-03-16 01:00:52.949574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-16 01:00:52.949587 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:00:52.949594 | orchestrator | 2026-03-16 01:00:52.949600 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-03-16 01:00:52.949606 | orchestrator | Monday 16 March 2026 00:58:17 +0000 (0:00:00.530) 0:00:09.316 ********** 2026-03-16 01:00:52.949614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-16 01:00:52.949621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-16 01:00:52.949630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-16 01:00:52.949637 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:00:52.949917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-16 01:00:52.949938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-16 01:00:52.949945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-16 01:00:52.949951 | 
orchestrator | skipping: [testbed-node-1] 2026-03-16 01:00:52.949958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-16 01:00:52.949969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-16 01:00:52.950060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-16 01:00:52.950068 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:00:52.950074 | orchestrator | 2026-03-16 01:00:52.950081 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-03-16 01:00:52.950093 | orchestrator | Monday 16 March 2026 00:58:18 +0000 (0:00:00.760) 0:00:10.076 ********** 2026-03-16 01:00:52.950106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-16 01:00:52.950114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-16 01:00:52.950125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-16 01:00:52.950135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-16 01:00:52.950155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-16 01:00:52.950179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 
2026-03-16 01:00:52.950190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-16 01:00:52.950201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-16 01:00:52.950212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-16 
01:00:52.950222 | orchestrator | 2026-03-16 01:00:52.950234 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-03-16 01:00:52.950245 | orchestrator | Monday 16 March 2026 00:58:21 +0000 (0:00:03.679) 0:00:13.756 ********** 2026-03-16 01:00:52.950252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-16 01:00:52.950269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2026-03-16 01:00:52.950277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-16 01:00:52.950284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-16 01:00:52.950294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-16 01:00:52.950301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-16 01:00:52.950316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-16 01:00:52.950323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-16 01:00:52.950330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-16 01:00:52.950336 | orchestrator | 2026-03-16 01:00:52.950343 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-03-16 01:00:52.950349 | orchestrator | Monday 16 March 2026 00:58:27 +0000 (0:00:05.911) 0:00:19.668 ********** 2026-03-16 01:00:52.950355 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:00:52.950362 | orchestrator | changed: [testbed-node-2] 2026-03-16 01:00:52.950368 | orchestrator | changed: [testbed-node-1] 2026-03-16 01:00:52.950374 | orchestrator | 
2026-03-16 01:00:52.950380 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-03-16 01:00:52.950387 | orchestrator | Monday 16 March 2026 00:58:29 +0000 (0:00:01.571) 0:00:21.239 ********** 2026-03-16 01:00:52.950393 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:00:52.950399 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:00:52.950405 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:00:52.950411 | orchestrator | 2026-03-16 01:00:52.950418 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-03-16 01:00:52.950424 | orchestrator | Monday 16 March 2026 00:58:29 +0000 (0:00:00.559) 0:00:21.799 ********** 2026-03-16 01:00:52.950430 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:00:52.950436 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:00:52.950442 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:00:52.950448 | orchestrator | 2026-03-16 01:00:52.950455 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-03-16 01:00:52.950461 | orchestrator | Monday 16 March 2026 00:58:30 +0000 (0:00:00.306) 0:00:22.105 ********** 2026-03-16 01:00:52.950467 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:00:52.950473 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:00:52.950479 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:00:52.950485 | orchestrator | 2026-03-16 01:00:52.950492 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-03-16 01:00:52.950502 | orchestrator | Monday 16 March 2026 00:58:30 +0000 (0:00:00.537) 0:00:22.643 ********** 2026-03-16 01:00:52.950512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-16 01:00:52.950524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-16 01:00:52.950531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-16 01:00:52.950537 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:00:52.950544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-16 01:00:52.950551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-16 01:00:52.950568 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-16 01:00:52.950575 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:00:52.950586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-16 01:00:52.950593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-16 01:00:52.950600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-16 01:00:52.950606 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:00:52.950612 | orchestrator | 2026-03-16 01:00:52.950618 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-16 01:00:52.950625 | orchestrator | Monday 16 March 2026 00:58:31 +0000 (0:00:00.774) 0:00:23.417 ********** 2026-03-16 01:00:52.950631 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:00:52.950637 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:00:52.950643 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:00:52.950650 | orchestrator | 2026-03-16 01:00:52.950656 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-03-16 01:00:52.950662 | orchestrator | Monday 16 March 2026 00:58:31 +0000 (0:00:00.309) 0:00:23.727 ********** 2026-03-16 01:00:52.950672 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-16 01:00:52.950679 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-16 01:00:52.950685 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-16 01:00:52.950691 | orchestrator | 2026-03-16 01:00:52.950697 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-03-16 01:00:52.950704 | orchestrator | Monday 16 March 2026 00:58:33 +0000 (0:00:01.534) 0:00:25.261 ********** 2026-03-16 01:00:52.950710 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-16 01:00:52.950716 | orchestrator | 2026-03-16 01:00:52.950723 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-03-16 01:00:52.950729 | orchestrator | Monday 16 March 2026 00:58:34 +0000 (0:00:01.230) 0:00:26.491 ********** 2026-03-16 01:00:52.950735 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:00:52.950741 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:00:52.950747 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:00:52.950753 | orchestrator | 2026-03-16 01:00:52.950763 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-03-16 01:00:52.950769 | orchestrator | Monday 16 March 2026 00:58:35 +0000 (0:00:00.979) 0:00:27.470 ********** 2026-03-16 01:00:52.950775 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-16 01:00:52.950781 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-16 01:00:52.950787 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-16 01:00:52.950794 | orchestrator | 2026-03-16 01:00:52.950800 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-03-16 01:00:52.950806 | orchestrator | Monday 16 March 2026 00:58:36 +0000 (0:00:01.135) 
0:00:28.606 ********** 2026-03-16 01:00:52.950812 | orchestrator | ok: [testbed-node-0] 2026-03-16 01:00:52.950819 | orchestrator | ok: [testbed-node-1] 2026-03-16 01:00:52.950825 | orchestrator | ok: [testbed-node-2] 2026-03-16 01:00:52.950831 | orchestrator | 2026-03-16 01:00:52.950837 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-03-16 01:00:52.950843 | orchestrator | Monday 16 March 2026 00:58:36 +0000 (0:00:00.339) 0:00:28.945 ********** 2026-03-16 01:00:52.950850 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-16 01:00:52.950856 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-16 01:00:52.950862 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-16 01:00:52.950868 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-16 01:00:52.950882 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-16 01:00:52.950889 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-16 01:00:52.950895 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-16 01:00:52.950901 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-16 01:00:52.950907 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-16 01:00:52.950913 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-16 01:00:52.950920 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-16 
01:00:52.950926 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-03-16 01:00:52.950932 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-03-16 01:00:52.950938 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-03-16 01:00:52.950948 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-03-16 01:00:52.950954 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-16 01:00:52.950960 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-16 01:00:52.950967 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-16 01:00:52.951001 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-16 01:00:52.951008 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-16 01:00:52.951014 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-16 01:00:52.951020 | orchestrator |
2026-03-16 01:00:52.951026 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2026-03-16 01:00:52.951033 | orchestrator | Monday 16 March 2026 00:58:46 +0000 (0:00:09.232) 0:00:38.177 **********
2026-03-16 01:00:52.951039 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-16 01:00:52.951045 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-16 01:00:52.951051 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-16 01:00:52.951058 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-16 01:00:52.951064 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-16 01:00:52.951070 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-16 01:00:52.951076 | orchestrator |
2026-03-16 01:00:52.951082 | orchestrator | TASK [keystone : Check keystone containers] ************************************
2026-03-16 01:00:52.951089 | orchestrator | Monday 16 March 2026 00:58:48 +0000 (0:00:02.871) 0:00:41.049 **********
2026-03-16 01:00:52.951099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-16 01:00:52.951112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-16 01:00:52.951124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-16 01:00:52.951131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-16 01:00:52.951138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-16 01:00:52.951147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-16 01:00:52.951154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-16 01:00:52.951164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-16 01:00:52.951175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-16 01:00:52.951182 | orchestrator |
2026-03-16 01:00:52.951188 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-16 01:00:52.951194 | orchestrator | Monday 16 March 2026 00:58:51 +0000 (0:00:02.471) 0:00:43.520 **********
2026-03-16 01:00:52.951201 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:00:52.951207 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:00:52.951213 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:00:52.951219 | orchestrator |
2026-03-16 01:00:52.951225 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2026-03-16 01:00:52.951232 | orchestrator | Monday 16 March 2026 00:58:51 +0000 (0:00:00.310) 0:00:43.831 **********
2026-03-16 01:00:52.951238 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:00:52.951244 | orchestrator |
2026-03-16 01:00:52.951250 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2026-03-16 01:00:52.951256 | orchestrator | Monday 16 March 2026 00:58:54 +0000 (0:00:02.415) 0:00:46.247 **********
2026-03-16 01:00:52.951263 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:00:52.951269 | orchestrator |
2026-03-16 01:00:52.951275 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2026-03-16 01:00:52.951281 | orchestrator | Monday 16 March 2026 00:58:56 +0000 (0:00:02.450) 0:00:48.697 **********
2026-03-16 01:00:52.951287 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:00:52.951294 | orchestrator | ok: [testbed-node-1]
2026-03-16 01:00:52.951300 | orchestrator | ok: [testbed-node-2]
2026-03-16 01:00:52.951306 | orchestrator |
2026-03-16 01:00:52.951312 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2026-03-16 01:00:52.951318 | orchestrator | Monday 16 March 2026 00:58:57 +0000 (0:00:01.032) 0:00:49.730 **********
2026-03-16 01:00:52.951325 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:00:52.951331 | orchestrator | ok: [testbed-node-1]
2026-03-16 01:00:52.951337 | orchestrator | ok: [testbed-node-2]
2026-03-16 01:00:52.951343 | orchestrator |
2026-03-16 01:00:52.951349 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2026-03-16 01:00:52.951355 | orchestrator | Monday 16 March 2026 00:58:58 +0000 (0:00:00.349) 0:00:50.080 **********
2026-03-16 01:00:52.951362 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:00:52.951368 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:00:52.951374 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:00:52.951380 | orchestrator |
2026-03-16 01:00:52.951387 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2026-03-16 01:00:52.951393 | orchestrator | Monday 16 March 2026 00:58:58 +0000 (0:00:00.371) 0:00:50.451 **********
2026-03-16 01:00:52.951399 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:00:52.951405 | orchestrator |
2026-03-16 01:00:52.951415 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2026-03-16 01:00:52.951421 | orchestrator | Monday 16 March 2026 00:59:13 +0000 (0:00:15.280) 0:01:05.732 **********
2026-03-16 01:00:52.951427 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:00:52.951437 | orchestrator |
2026-03-16 01:00:52.951444 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-03-16 01:00:52.951450 | orchestrator | Monday 16 March 2026 00:59:25 +0000 (0:00:11.640) 0:01:17.372 **********
2026-03-16 01:00:52.951456 | orchestrator |
2026-03-16 01:00:52.951462 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-03-16 01:00:52.951469 | orchestrator | Monday 16 March 2026 00:59:25 +0000 (0:00:00.073) 0:01:17.446 **********
2026-03-16 01:00:52.951475 | orchestrator |
2026-03-16 01:00:52.951481 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-03-16 01:00:52.951487 | orchestrator | Monday 16 March 2026 00:59:25 +0000 (0:00:00.067) 0:01:17.513 **********
2026-03-16 01:00:52.951493 | orchestrator |
2026-03-16 01:00:52.951499 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2026-03-16 01:00:52.951506 | orchestrator | Monday 16 March 2026 00:59:25 +0000 (0:00:00.066) 0:01:17.580 **********
2026-03-16 01:00:52.951512 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:00:52.951518 | orchestrator | changed: [testbed-node-1]
2026-03-16 01:00:52.951524 | orchestrator | changed: [testbed-node-2]
2026-03-16 01:00:52.951530 | orchestrator |
2026-03-16 01:00:52.951537 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2026-03-16 01:00:52.951543 | orchestrator | Monday 16 March 2026 00:59:39 +0000 (0:00:13.745) 0:01:31.325 **********
2026-03-16 01:00:52.951549 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:00:52.951555 | orchestrator | changed: [testbed-node-1]
2026-03-16 01:00:52.951561 | orchestrator | changed: [testbed-node-2]
2026-03-16 01:00:52.951568 | orchestrator |
2026-03-16 01:00:52.951578 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2026-03-16 01:00:52.951584 | orchestrator | Monday 16 March 2026 00:59:48 +0000 (0:00:09.694) 0:01:41.020 **********
2026-03-16 01:00:52.951590 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:00:52.951597 | orchestrator | changed: [testbed-node-1]
2026-03-16 01:00:52.951603 | orchestrator | changed: [testbed-node-2]
2026-03-16 01:00:52.951609 | orchestrator |
2026-03-16 01:00:52.951615 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-16 01:00:52.951621 | orchestrator | Monday 16 March 2026 00:59:55 +0000 (0:00:06.778) 0:01:47.798 **********
2026-03-16 01:00:52.951628 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for
testbed-node-0, testbed-node-1, testbed-node-2
2026-03-16 01:00:52.951634 | orchestrator |
2026-03-16 01:00:52.951640 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2026-03-16 01:00:52.951647 | orchestrator | Monday 16 March 2026 00:59:56 +0000 (0:00:00.827) 0:01:48.625 **********
2026-03-16 01:00:52.951653 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:00:52.951659 | orchestrator | ok: [testbed-node-1]
2026-03-16 01:00:52.951665 | orchestrator | ok: [testbed-node-2]
2026-03-16 01:00:52.951671 | orchestrator |
2026-03-16 01:00:52.951677 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2026-03-16 01:00:52.951684 | orchestrator | Monday 16 March 2026 00:59:57 +0000 (0:00:00.806) 0:01:49.432 **********
2026-03-16 01:00:52.951690 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:00:52.951696 | orchestrator |
2026-03-16 01:00:52.951702 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2026-03-16 01:00:52.951708 | orchestrator | Monday 16 March 2026 00:59:59 +0000 (0:00:01.681) 0:01:51.114 **********
2026-03-16 01:00:52.951715 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2026-03-16 01:00:52.951721 | orchestrator |
2026-03-16 01:00:52.951727 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2026-03-16 01:00:52.951733 | orchestrator | Monday 16 March 2026 01:00:11 +0000 (0:00:12.736) 0:02:03.850 **********
2026-03-16 01:00:52.951739 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2026-03-16 01:00:52.951746 | orchestrator |
2026-03-16 01:00:52.951752 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2026-03-16 01:00:52.951763 | orchestrator | Monday 16 March 2026 01:00:40 +0000 (0:00:28.632) 0:02:32.483 **********
2026-03-16 01:00:52.951769 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2026-03-16 01:00:52.951775 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2026-03-16 01:00:52.951781 | orchestrator |
2026-03-16 01:00:52.951787 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2026-03-16 01:00:52.951794 | orchestrator | Monday 16 March 2026 01:00:47 +0000 (0:00:07.356) 0:02:39.839 **********
2026-03-16 01:00:52.951800 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:00:52.951806 | orchestrator |
2026-03-16 01:00:52.951812 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2026-03-16 01:00:52.951818 | orchestrator | Monday 16 March 2026 01:00:47 +0000 (0:00:00.124) 0:02:39.964 **********
2026-03-16 01:00:52.951825 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:00:52.951831 | orchestrator |
2026-03-16 01:00:52.951837 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2026-03-16 01:00:52.951843 | orchestrator | Monday 16 March 2026 01:00:48 +0000 (0:00:00.107) 0:02:40.072 **********
2026-03-16 01:00:52.951849 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:00:52.951855 | orchestrator |
2026-03-16 01:00:52.951861 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2026-03-16 01:00:52.951868 | orchestrator | Monday 16 March 2026 01:00:48 +0000 (0:00:00.126) 0:02:40.198 **********
2026-03-16 01:00:52.951874 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:00:52.951880 | orchestrator |
2026-03-16 01:00:52.951886 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2026-03-16 01:00:52.951892 | orchestrator | Monday 16 March 2026 01:00:48 +0000 (0:00:00.401) 0:02:40.599 **********
2026-03-16 01:00:52.951899 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:00:52.951905 | orchestrator |
2026-03-16 01:00:52.951911 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-16 01:00:52.951920 | orchestrator | Monday 16 March 2026 01:00:51 +0000 (0:00:03.416) 0:02:44.016 **********
2026-03-16 01:00:52.951927 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:00:52.951933 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:00:52.951939 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:00:52.951945 | orchestrator |
2026-03-16 01:00:52.951952 | orchestrator | PLAY RECAP *********************************************************************
2026-03-16 01:00:52.951959 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-16 01:00:52.951966 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-16 01:00:52.951985 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-16 01:00:52.951992 | orchestrator |
2026-03-16 01:00:52.951998 | orchestrator |
2026-03-16 01:00:52.952004 | orchestrator | TASKS RECAP ********************************************************************
2026-03-16 01:00:52.952010 | orchestrator | Monday 16 March 2026 01:00:52 +0000 (0:00:00.396) 0:02:44.413 **********
2026-03-16 01:00:52.952017 | orchestrator | ===============================================================================
2026-03-16 01:00:52.952023 | orchestrator | service-ks-register : keystone | Creating services --------------------- 28.63s
2026-03-16 01:00:52.952029 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.28s
2026-03-16 01:00:52.952039 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 13.75s
2026-03-16 01:00:52.952045 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 12.74s
2026-03-16 01:00:52.952051 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.64s
2026-03-16 01:00:52.952058 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.69s
2026-03-16 01:00:52.952068 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.23s
2026-03-16 01:00:52.952074 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.36s
2026-03-16 01:00:52.952080 | orchestrator | keystone : Restart keystone container ----------------------------------- 6.78s
2026-03-16 01:00:52.952086 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.91s
2026-03-16 01:00:52.952092 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.68s
2026-03-16 01:00:52.952098 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.68s
2026-03-16 01:00:52.952105 | orchestrator | keystone : Creating default user role ----------------------------------- 3.42s
2026-03-16 01:00:52.952111 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.87s
2026-03-16 01:00:52.952117 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.47s
2026-03-16 01:00:52.952123 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.45s
2026-03-16 01:00:52.952130 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.42s
2026-03-16 01:00:52.952136 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.68s
2026-03-16 01:00:52.952142 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.61s
2026-03-16 01:00:52.952148 | orchestrator | keystone : Copying keystone-startup script for keystone -----------------
1.57s
2026-03-16 01:00:52.952154 | orchestrator | 2026-03-16 01:00:52 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:00:55.971187 | orchestrator | 2026-03-16 01:00:55 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED
2026-03-16 01:00:55.974500 | orchestrator | 2026-03-16 01:00:55 | INFO  | Task a9bf87c9-53ab-497d-be4d-d7af3937377e is in state STARTED
2026-03-16 01:00:55.975341 | orchestrator | 2026-03-16 01:00:55 | INFO  | Task 74c613bb-7e9a-4a45-8b54-6d28c9eb3239 is in state STARTED
2026-03-16 01:00:55.976354 | orchestrator | 2026-03-16 01:00:55 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED
2026-03-16 01:00:55.977228 | orchestrator | 2026-03-16 01:00:55 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED
2026-03-16 01:00:55.977329 | orchestrator | 2026-03-16 01:00:55 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:00:59.006924 | orchestrator | 2026-03-16 01:00:59 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED
2026-03-16 01:00:59.009079 | orchestrator | 2026-03-16 01:00:59 | INFO  | Task a9bf87c9-53ab-497d-be4d-d7af3937377e is in state STARTED
2026-03-16 01:00:59.011210 | orchestrator | 2026-03-16 01:00:59 | INFO  | Task 74c613bb-7e9a-4a45-8b54-6d28c9eb3239 is in state STARTED
2026-03-16 01:00:59.012411 | orchestrator | 2026-03-16 01:00:59 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED
2026-03-16 01:00:59.013545 | orchestrator | 2026-03-16 01:00:59 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED
2026-03-16 01:00:59.013819 | orchestrator | 2026-03-16 01:00:59 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:01:02.047992 | orchestrator | 2026-03-16 01:01:02 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED
2026-03-16 01:01:02.049802 | orchestrator | 2026-03-16 01:01:02 | INFO  | Task a9bf87c9-53ab-497d-be4d-d7af3937377e is in state STARTED
2026-03-16 01:01:02.051587 | orchestrator | 2026-03-16 01:01:02 | INFO  | Task 74c613bb-7e9a-4a45-8b54-6d28c9eb3239 is in state STARTED
2026-03-16 01:01:02.053920 | orchestrator | 2026-03-16 01:01:02 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED
2026-03-16 01:01:02.056062 | orchestrator | 2026-03-16 01:01:02 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED
2026-03-16 01:01:02.058544 | orchestrator | 2026-03-16 01:01:02 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:01:05.105385 | orchestrator | 2026-03-16 01:01:05 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED
2026-03-16 01:01:05.106852 | orchestrator | 2026-03-16 01:01:05 | INFO  | Task a9bf87c9-53ab-497d-be4d-d7af3937377e is in state STARTED
2026-03-16 01:01:05.109267 | orchestrator | 2026-03-16 01:01:05 | INFO  | Task 74c613bb-7e9a-4a45-8b54-6d28c9eb3239 is in state STARTED
2026-03-16 01:01:05.111192 | orchestrator | 2026-03-16 01:01:05 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED
2026-03-16 01:01:05.113023 | orchestrator | 2026-03-16 01:01:05 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED
2026-03-16 01:01:05.113193 | orchestrator | 2026-03-16 01:01:05 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:01:08.157341 | orchestrator | 2026-03-16 01:01:08 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED
2026-03-16 01:01:08.160001 | orchestrator | 2026-03-16 01:01:08 | INFO  | Task a9bf87c9-53ab-497d-be4d-d7af3937377e is in state STARTED
2026-03-16 01:01:08.161243 | orchestrator | 2026-03-16 01:01:08 | INFO  | Task 74c613bb-7e9a-4a45-8b54-6d28c9eb3239 is in state STARTED
2026-03-16 01:01:08.163107 | orchestrator | 2026-03-16 01:01:08 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED
2026-03-16 01:01:08.164655 | orchestrator | 2026-03-16 01:01:08 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED
2026-03-16 01:01:08.164704 | orchestrator | 2026-03-16 01:01:08 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:01:11.216661 | orchestrator | 2026-03-16 01:01:11 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED
2026-03-16 01:01:11.218551 | orchestrator | 2026-03-16 01:01:11 | INFO  | Task a9bf87c9-53ab-497d-be4d-d7af3937377e is in state STARTED
2026-03-16 01:01:11.221958 | orchestrator | 2026-03-16 01:01:11 | INFO  | Task 74c613bb-7e9a-4a45-8b54-6d28c9eb3239 is in state SUCCESS
2026-03-16 01:01:11.223642 | orchestrator | 2026-03-16 01:01:11 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED
2026-03-16 01:01:11.224918 | orchestrator | 2026-03-16 01:01:11 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED
2026-03-16 01:01:11.226181 | orchestrator | 2026-03-16 01:01:11 | INFO  | Task 4bbd8c49-19b8-4d27-98b3-e5b122f72d83 is in state STARTED
2026-03-16 01:01:11.226300 | orchestrator | 2026-03-16 01:01:11 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:01:14.263226 | orchestrator | 2026-03-16 01:01:14 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED
2026-03-16 01:01:14.265068 | orchestrator | 2026-03-16 01:01:14 | INFO  | Task a9bf87c9-53ab-497d-be4d-d7af3937377e is in state STARTED
2026-03-16 01:01:14.267511 | orchestrator | 2026-03-16 01:01:14 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED
2026-03-16 01:01:14.269858 | orchestrator | 2026-03-16 01:01:14 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED
2026-03-16 01:01:14.271899 | orchestrator | 2026-03-16 01:01:14 | INFO  | Task 4bbd8c49-19b8-4d27-98b3-e5b122f72d83 is in state STARTED
2026-03-16 01:01:14.271977 | orchestrator | 2026-03-16 01:01:14 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:01:17.321801 | orchestrator | 2026-03-16 01:01:17 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED
2026-03-16 01:01:17.323494 | orchestrator | 2026-03-16 01:01:17 | INFO  | Task a9bf87c9-53ab-497d-be4d-d7af3937377e is in state STARTED
2026-03-16 01:01:17.324950 | orchestrator | 2026-03-16 01:01:17 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED
2026-03-16 01:01:17.326427 | orchestrator | 2026-03-16 01:01:17 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED
2026-03-16 01:01:17.328410 | orchestrator | 2026-03-16 01:01:17 | INFO  | Task 4bbd8c49-19b8-4d27-98b3-e5b122f72d83 is in state STARTED
2026-03-16 01:01:17.328449 | orchestrator | 2026-03-16 01:01:17 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:01:20.365213 | orchestrator | 2026-03-16 01:01:20 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED
2026-03-16 01:01:20.366486 | orchestrator | 2026-03-16 01:01:20 | INFO  | Task a9bf87c9-53ab-497d-be4d-d7af3937377e is in state STARTED
2026-03-16 01:01:20.368367 | orchestrator | 2026-03-16 01:01:20 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED
2026-03-16 01:01:20.371212 | orchestrator | 2026-03-16 01:01:20 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED
2026-03-16 01:01:20.373469 | orchestrator | 2026-03-16 01:01:20 | INFO  | Task 4bbd8c49-19b8-4d27-98b3-e5b122f72d83 is in state STARTED
2026-03-16 01:01:20.373620 | orchestrator | 2026-03-16 01:01:20 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:01:23.418298 | orchestrator | 2026-03-16 01:01:23 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED
2026-03-16 01:01:23.420568 | orchestrator | 2026-03-16 01:01:23 | INFO  | Task a9bf87c9-53ab-497d-be4d-d7af3937377e is in state STARTED
2026-03-16 01:01:23.422232 | orchestrator | 2026-03-16 01:01:23 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED
2026-03-16 01:01:23.423655 | orchestrator | 2026-03-16 01:01:23 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED
2026-03-16 01:01:23.424960 | orchestrator | 2026-03-16 01:01:23 | INFO  | Task 4bbd8c49-19b8-4d27-98b3-e5b122f72d83 is in state STARTED
2026-03-16 01:01:23.425088 | orchestrator | 2026-03-16 01:01:23 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:01:26.466441 | orchestrator | 2026-03-16 01:01:26 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED
2026-03-16 01:01:26.469854 | orchestrator | 2026-03-16 01:01:26 | INFO  | Task a9bf87c9-53ab-497d-be4d-d7af3937377e is in state STARTED
2026-03-16 01:01:26.471734 | orchestrator | 2026-03-16 01:01:26 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED
2026-03-16 01:01:26.474085 | orchestrator | 2026-03-16 01:01:26 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED
2026-03-16 01:01:26.476362 | orchestrator | 2026-03-16 01:01:26 | INFO  | Task 4bbd8c49-19b8-4d27-98b3-e5b122f72d83 is in state STARTED
2026-03-16 01:01:26.476476 | orchestrator | 2026-03-16 01:01:26 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:01:29.517866 | orchestrator | 2026-03-16 01:01:29 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED
2026-03-16 01:01:29.520626 | orchestrator | 2026-03-16 01:01:29 | INFO  | Task a9bf87c9-53ab-497d-be4d-d7af3937377e is in state STARTED
2026-03-16 01:01:29.522589 | orchestrator | 2026-03-16 01:01:29 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED
2026-03-16 01:01:29.524796 | orchestrator | 2026-03-16 01:01:29 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED
2026-03-16 01:01:29.526580 | orchestrator | 2026-03-16 01:01:29 | INFO  | Task 4bbd8c49-19b8-4d27-98b3-e5b122f72d83 is in state STARTED
2026-03-16 01:01:29.527335 | orchestrator | 2026-03-16 01:01:29 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:01:32.567489 | orchestrator | 2026-03-16 01:01:32 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED
2026-03-16 01:01:32.567579 | orchestrator | 2026-03-16 01:01:32 | INFO  | Task a9bf87c9-53ab-497d-be4d-d7af3937377e is in state STARTED
2026-03-16 01:01:32.568275 | orchestrator | 2026-03-16 01:01:32 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED
2026-03-16 01:01:32.568990 | orchestrator | 2026-03-16 01:01:32 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED
2026-03-16 01:01:32.569562 | orchestrator | 2026-03-16 01:01:32 | INFO  | Task 4bbd8c49-19b8-4d27-98b3-e5b122f72d83 is in state STARTED
2026-03-16 01:01:32.569593 | orchestrator | 2026-03-16 01:01:32 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:01:35.595759 | orchestrator | 2026-03-16 01:01:35 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED
2026-03-16 01:01:35.595853 | orchestrator | 2026-03-16 01:01:35 | INFO  | Task deb6392b-524f-425c-81ee-bf9c2dab0939 is in state STARTED
2026-03-16 01:01:35.596312 | orchestrator | 2026-03-16 01:01:35 | INFO  | Task a9bf87c9-53ab-497d-be4d-d7af3937377e is in state SUCCESS
2026-03-16 01:01:35.598490 | orchestrator | 2026-03-16 01:01:35 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED
2026-03-16 01:01:35.599398 | orchestrator | 2026-03-16 01:01:35 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED
2026-03-16 01:01:35.599563 | orchestrator | 2026-03-16 01:01:35 | INFO  | Task 4bbd8c49-19b8-4d27-98b3-e5b122f72d83 is in state STARTED
2026-03-16 01:01:35.599580 | orchestrator | 2026-03-16 01:01:35 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:01:38.623217 | orchestrator | 2026-03-16 01:01:38 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED
2026-03-16 01:01:38.625047 | orchestrator | 2026-03-16 01:01:38 | INFO  | Task deb6392b-524f-425c-81ee-bf9c2dab0939 is in state STARTED
2026-03-16 01:01:38.625678 | orchestrator | 2026-03-16 01:01:38 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED
2026-03-16 01:01:38.626177 | orchestrator | 2026-03-16 01:01:38 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED
2026-03-16 01:01:38.626947 | orchestrator | 2026-03-16 01:01:38 | INFO  | Task 4bbd8c49-19b8-4d27-98b3-e5b122f72d83 is in state STARTED
2026-03-16 01:01:38.626970 | orchestrator | 2026-03-16 01:01:38 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:01:41.661644 | orchestrator | 2026-03-16 01:01:41 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED
2026-03-16 01:01:41.664449 | orchestrator | 2026-03-16 01:01:41 | INFO  | Task deb6392b-524f-425c-81ee-bf9c2dab0939 is in state STARTED
2026-03-16 01:01:41.666221 | orchestrator | 2026-03-16 01:01:41 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED
2026-03-16 01:01:41.668113 | orchestrator | 2026-03-16 01:01:41 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED
2026-03-16 01:01:41.670604 | orchestrator | 2026-03-16 01:01:41 | INFO  | Task 4bbd8c49-19b8-4d27-98b3-e5b122f72d83 is in state STARTED
2026-03-16 01:01:41.670673 | orchestrator | 2026-03-16 01:01:41 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:01:44.698553 | orchestrator | 2026-03-16 01:01:44 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED
2026-03-16 01:01:44.698707 | orchestrator | 2026-03-16 01:01:44 | INFO  | Task deb6392b-524f-425c-81ee-bf9c2dab0939 is in state STARTED
2026-03-16 01:01:44.699639 | orchestrator | 2026-03-16 01:01:44 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED
2026-03-16 01:01:44.700632 | orchestrator | 2026-03-16 01:01:44 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED
2026-03-16 01:01:44.702120 | orchestrator | 2026-03-16 01:01:44 | INFO  | Task 4bbd8c49-19b8-4d27-98b3-e5b122f72d83 is in state STARTED
2026-03-16 01:01:44.702161 | orchestrator | 2026-03-16 01:01:44 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:01:47.730270 | orchestrator | 2026-03-16 01:01:47 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED
2026-03-16 01:01:47.730367 | orchestrator | 2026-03-16 01:01:47 | INFO  | Task deb6392b-524f-425c-81ee-bf9c2dab0939 is in state STARTED
2026-03-16 01:01:47.731204 | orchestrator | 2026-03-16 01:01:47 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED
2026-03-16 01:01:47.732046 | orchestrator | 2026-03-16 01:01:47 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED
2026-03-16 01:01:47.732925 | orchestrator | 2026-03-16 01:01:47 | INFO  | Task 4bbd8c49-19b8-4d27-98b3-e5b122f72d83 is in state STARTED
2026-03-16 01:01:47.733733 | orchestrator | 2026-03-16 01:01:47 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:01:50.758112 | orchestrator | 2026-03-16 01:01:50 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED
2026-03-16 01:01:50.758216 | orchestrator | 2026-03-16 01:01:50 | INFO  | Task deb6392b-524f-425c-81ee-bf9c2dab0939 is in state STARTED
2026-03-16 01:01:50.758601 | orchestrator | 2026-03-16 01:01:50 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED
2026-03-16 01:01:50.761642 | orchestrator | 2026-03-16 01:01:50 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED
2026-03-16 01:01:50.762204 | orchestrator | 2026-03-16 01:01:50 | INFO  | Task 4bbd8c49-19b8-4d27-98b3-e5b122f72d83 is in state STARTED
2026-03-16 01:01:50.762247 | orchestrator | 2026-03-16 01:01:50 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:01:53.790794 | orchestrator | 2026-03-16 01:01:53 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED
2026-03-16 01:01:53.791439 | orchestrator | 2026-03-16 01:01:53 | INFO  | Task deb6392b-524f-425c-81ee-bf9c2dab0939 is in state STARTED
2026-03-16 01:01:53.793077 | orchestrator | 2026-03-16 01:01:53 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED
2026-03-16 01:01:53.793773 | orchestrator | 2026-03-16 01:01:53 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED
2026-03-16 01:01:53.794765 | orchestrator | 2026-03-16 01:01:53 | INFO  | Task 4bbd8c49-19b8-4d27-98b3-e5b122f72d83 is in state STARTED
2026-03-16 01:01:53.794804 | orchestrator | 2026-03-16 01:01:53 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:01:56.821043 | orchestrator | 2026-03-16 01:01:56 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED
2026-03-16 01:01:56.821124 | orchestrator | 2026-03-16 01:01:56 | INFO  | Task deb6392b-524f-425c-81ee-bf9c2dab0939 is in state STARTED
2026-03-16 01:01:56.821565 | orchestrator | 2026-03-16 01:01:56 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED
2026-03-16 01:01:56.822007 | orchestrator | 2026-03-16 01:01:56 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED
2026-03-16 01:01:56.822753 | orchestrator | 2026-03-16 01:01:56 | INFO  | Task 4bbd8c49-19b8-4d27-98b3-e5b122f72d83 is in state STARTED
2026-03-16 01:01:56.822772 | orchestrator | 2026-03-16 01:01:56 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:01:59.840120 | orchestrator | 2026-03-16 01:01:59 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED
2026-03-16 01:01:59.840388 | orchestrator | 2026-03-16 01:01:59 | INFO  | Task deb6392b-524f-425c-81ee-bf9c2dab0939 is in state STARTED
2026-03-16 01:01:59.841173 | orchestrator | 2026-03-16 01:01:59 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED
2026-03-16 01:01:59.841725 | orchestrator | 2026-03-16 01:01:59 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED
2026-03-16 01:01:59.842539 | orchestrator | 2026-03-16 01:01:59 | INFO  | Task 4bbd8c49-19b8-4d27-98b3-e5b122f72d83 is in state STARTED
2026-03-16 01:01:59.842583 | orchestrator | 2026-03-16 01:01:59 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:02:02.863176 | orchestrator | 2026-03-16 01:02:02 | INFO  | Task
e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED 2026-03-16 01:02:02.863603 | orchestrator | 2026-03-16 01:02:02 | INFO  | Task deb6392b-524f-425c-81ee-bf9c2dab0939 is in state STARTED 2026-03-16 01:02:02.864160 | orchestrator | 2026-03-16 01:02:02 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED 2026-03-16 01:02:02.865364 | orchestrator | 2026-03-16 01:02:02 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED 2026-03-16 01:02:02.865899 | orchestrator | 2026-03-16 01:02:02 | INFO  | Task 4bbd8c49-19b8-4d27-98b3-e5b122f72d83 is in state STARTED 2026-03-16 01:02:02.865922 | orchestrator | 2026-03-16 01:02:02 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:02:05.894961 | orchestrator | 2026-03-16 01:02:05 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED 2026-03-16 01:02:05.895348 | orchestrator | 2026-03-16 01:02:05 | INFO  | Task deb6392b-524f-425c-81ee-bf9c2dab0939 is in state STARTED 2026-03-16 01:02:05.896152 | orchestrator | 2026-03-16 01:02:05 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED 2026-03-16 01:02:05.897177 | orchestrator | 2026-03-16 01:02:05 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED 2026-03-16 01:02:05.898101 | orchestrator | 2026-03-16 01:02:05 | INFO  | Task 4bbd8c49-19b8-4d27-98b3-e5b122f72d83 is in state STARTED 2026-03-16 01:02:05.898124 | orchestrator | 2026-03-16 01:02:05 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:02:08.929806 | orchestrator | 2026-03-16 01:02:08 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED 2026-03-16 01:02:08.930259 | orchestrator | 2026-03-16 01:02:08 | INFO  | Task deb6392b-524f-425c-81ee-bf9c2dab0939 is in state STARTED 2026-03-16 01:02:08.930749 | orchestrator | 2026-03-16 01:02:08 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED 2026-03-16 01:02:08.931591 | orchestrator | 2026-03-16 01:02:08 | INFO  | Task 
54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED 2026-03-16 01:02:08.932189 | orchestrator | 2026-03-16 01:02:08 | INFO  | Task 4bbd8c49-19b8-4d27-98b3-e5b122f72d83 is in state STARTED 2026-03-16 01:02:08.932222 | orchestrator | 2026-03-16 01:02:08 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:02:11.956960 | orchestrator | 2026-03-16 01:02:11 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED 2026-03-16 01:02:11.957103 | orchestrator | 2026-03-16 01:02:11 | INFO  | Task deb6392b-524f-425c-81ee-bf9c2dab0939 is in state STARTED 2026-03-16 01:02:11.958634 | orchestrator | 2026-03-16 01:02:11 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED 2026-03-16 01:02:11.959080 | orchestrator | 2026-03-16 01:02:11 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED 2026-03-16 01:02:11.959803 | orchestrator | 2026-03-16 01:02:11 | INFO  | Task 4bbd8c49-19b8-4d27-98b3-e5b122f72d83 is in state STARTED 2026-03-16 01:02:11.959842 | orchestrator | 2026-03-16 01:02:11 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:02:14.984232 | orchestrator | 2026-03-16 01:02:14 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED 2026-03-16 01:02:14.984317 | orchestrator | 2026-03-16 01:02:14 | INFO  | Task deb6392b-524f-425c-81ee-bf9c2dab0939 is in state STARTED 2026-03-16 01:02:14.984669 | orchestrator | 2026-03-16 01:02:14 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED 2026-03-16 01:02:14.985337 | orchestrator | 2026-03-16 01:02:14 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED 2026-03-16 01:02:14.985799 | orchestrator | 2026-03-16 01:02:14 | INFO  | Task 4bbd8c49-19b8-4d27-98b3-e5b122f72d83 is in state STARTED 2026-03-16 01:02:14.985834 | orchestrator | 2026-03-16 01:02:14 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:02:18.010893 | orchestrator | 2026-03-16 01:02:18 | INFO  | Task 
e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED 2026-03-16 01:02:18.010972 | orchestrator | 2026-03-16 01:02:18 | INFO  | Task deb6392b-524f-425c-81ee-bf9c2dab0939 is in state STARTED 2026-03-16 01:02:18.011487 | orchestrator | 2026-03-16 01:02:18 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED 2026-03-16 01:02:18.011910 | orchestrator | 2026-03-16 01:02:18 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED 2026-03-16 01:02:18.012552 | orchestrator | 2026-03-16 01:02:18 | INFO  | Task 4bbd8c49-19b8-4d27-98b3-e5b122f72d83 is in state STARTED 2026-03-16 01:02:18.012573 | orchestrator | 2026-03-16 01:02:18 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:02:21.035358 | orchestrator | 2026-03-16 01:02:21 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED 2026-03-16 01:02:21.035635 | orchestrator | 2026-03-16 01:02:21 | INFO  | Task deb6392b-524f-425c-81ee-bf9c2dab0939 is in state STARTED 2026-03-16 01:02:21.038516 | orchestrator | 2026-03-16 01:02:21 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED 2026-03-16 01:02:21.039551 | orchestrator | 2026-03-16 01:02:21 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED 2026-03-16 01:02:21.040670 | orchestrator | 2026-03-16 01:02:21 | INFO  | Task 4bbd8c49-19b8-4d27-98b3-e5b122f72d83 is in state STARTED 2026-03-16 01:02:21.040721 | orchestrator | 2026-03-16 01:02:21 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:02:24.062317 | orchestrator | 2026-03-16 01:02:24 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED 2026-03-16 01:02:24.064377 | orchestrator | 2026-03-16 01:02:24 | INFO  | Task deb6392b-524f-425c-81ee-bf9c2dab0939 is in state STARTED 2026-03-16 01:02:24.066100 | orchestrator | 2026-03-16 01:02:24 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED 2026-03-16 01:02:24.067649 | orchestrator | 2026-03-16 01:02:24 | INFO  | Task 
54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED 2026-03-16 01:02:24.070448 | orchestrator | 2026-03-16 01:02:24 | INFO  | Task 4bbd8c49-19b8-4d27-98b3-e5b122f72d83 is in state STARTED 2026-03-16 01:02:24.070500 | orchestrator | 2026-03-16 01:02:24 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:02:27.123646 | orchestrator | 2026-03-16 01:02:27 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED 2026-03-16 01:02:27.124087 | orchestrator | 2026-03-16 01:02:27 | INFO  | Task deb6392b-524f-425c-81ee-bf9c2dab0939 is in state STARTED 2026-03-16 01:02:27.125412 | orchestrator | 2026-03-16 01:02:27 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED 2026-03-16 01:02:27.125799 | orchestrator | 2026-03-16 01:02:27 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED 2026-03-16 01:02:27.126871 | orchestrator | 2026-03-16 01:02:27 | INFO  | Task 4bbd8c49-19b8-4d27-98b3-e5b122f72d83 is in state STARTED 2026-03-16 01:02:27.126917 | orchestrator | 2026-03-16 01:02:27 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:02:30.149989 | orchestrator | 2026-03-16 01:02:30 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED 2026-03-16 01:02:30.150980 | orchestrator | 2026-03-16 01:02:30 | INFO  | Task deb6392b-524f-425c-81ee-bf9c2dab0939 is in state STARTED 2026-03-16 01:02:30.151991 | orchestrator | 2026-03-16 01:02:30 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED 2026-03-16 01:02:30.152957 | orchestrator | 2026-03-16 01:02:30 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED 2026-03-16 01:02:30.153770 | orchestrator | 2026-03-16 01:02:30 | INFO  | Task 4bbd8c49-19b8-4d27-98b3-e5b122f72d83 is in state STARTED 2026-03-16 01:02:30.153880 | orchestrator | 2026-03-16 01:02:30 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:02:33.293203 | orchestrator | 2026-03-16 01:02:33 | INFO  | Task 
e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED 2026-03-16 01:02:33.294788 | orchestrator | 2026-03-16 01:02:33 | INFO  | Task deb6392b-524f-425c-81ee-bf9c2dab0939 is in state STARTED 2026-03-16 01:02:33.296263 | orchestrator | 2026-03-16 01:02:33 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED 2026-03-16 01:02:33.298455 | orchestrator | 2026-03-16 01:02:33 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED 2026-03-16 01:02:33.299591 | orchestrator | 2026-03-16 01:02:33 | INFO  | Task 4bbd8c49-19b8-4d27-98b3-e5b122f72d83 is in state STARTED 2026-03-16 01:02:33.299706 | orchestrator | 2026-03-16 01:02:33 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:02:36.328336 | orchestrator | 2026-03-16 01:02:36 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED 2026-03-16 01:02:36.328524 | orchestrator | 2026-03-16 01:02:36 | INFO  | Task deb6392b-524f-425c-81ee-bf9c2dab0939 is in state STARTED 2026-03-16 01:02:36.329178 | orchestrator | 2026-03-16 01:02:36 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED 2026-03-16 01:02:36.329826 | orchestrator | 2026-03-16 01:02:36 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED 2026-03-16 01:02:36.330473 | orchestrator | 2026-03-16 01:02:36 | INFO  | Task 4bbd8c49-19b8-4d27-98b3-e5b122f72d83 is in state STARTED 2026-03-16 01:02:36.330522 | orchestrator | 2026-03-16 01:02:36 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:02:39.355199 | orchestrator | 2026-03-16 01:02:39 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED 2026-03-16 01:02:39.355316 | orchestrator | 2026-03-16 01:02:39 | INFO  | Task deb6392b-524f-425c-81ee-bf9c2dab0939 is in state STARTED 2026-03-16 01:02:39.356047 | orchestrator | 2026-03-16 01:02:39 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED 2026-03-16 01:02:39.356428 | orchestrator | 2026-03-16 01:02:39 | INFO  | Task 
54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED 2026-03-16 01:02:39.357290 | orchestrator | 2026-03-16 01:02:39 | INFO  | Task 4bbd8c49-19b8-4d27-98b3-e5b122f72d83 is in state STARTED 2026-03-16 01:02:39.357331 | orchestrator | 2026-03-16 01:02:39 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:02:42.382845 | orchestrator | 2026-03-16 01:02:42 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED 2026-03-16 01:02:42.383319 | orchestrator | 2026-03-16 01:02:42 | INFO  | Task deb6392b-524f-425c-81ee-bf9c2dab0939 is in state STARTED 2026-03-16 01:02:42.384354 | orchestrator | 2026-03-16 01:02:42 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED 2026-03-16 01:02:42.385214 | orchestrator | 2026-03-16 01:02:42 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED 2026-03-16 01:02:42.387047 | orchestrator | 2026-03-16 01:02:42 | INFO  | Task 4bbd8c49-19b8-4d27-98b3-e5b122f72d83 is in state STARTED 2026-03-16 01:02:42.387088 | orchestrator | 2026-03-16 01:02:42 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:02:45.413236 | orchestrator | 2026-03-16 01:02:45 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED 2026-03-16 01:02:45.413843 | orchestrator | 2026-03-16 01:02:45 | INFO  | Task deb6392b-524f-425c-81ee-bf9c2dab0939 is in state STARTED 2026-03-16 01:02:45.414658 | orchestrator | 2026-03-16 01:02:45 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED 2026-03-16 01:02:45.416640 | orchestrator | 2026-03-16 01:02:45 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED 2026-03-16 01:02:45.417519 | orchestrator | 2026-03-16 01:02:45 | INFO  | Task 4bbd8c49-19b8-4d27-98b3-e5b122f72d83 is in state STARTED 2026-03-16 01:02:45.417553 | orchestrator | 2026-03-16 01:02:45 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:02:48.446251 | orchestrator | 2026-03-16 01:02:48 | INFO  | Task 
e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED
2026-03-16 01:02:48.448512 | orchestrator | 2026-03-16 01:02:48 | INFO  | Task deb6392b-524f-425c-81ee-bf9c2dab0939 is in state STARTED
2026-03-16 01:02:48.448606 | orchestrator | 2026-03-16 01:02:48 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED
2026-03-16 01:02:48.449508 | orchestrator | 2026-03-16 01:02:48 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED
2026-03-16 01:02:48.450801 | orchestrator | 2026-03-16 01:02:48 | INFO  | Task 4bbd8c49-19b8-4d27-98b3-e5b122f72d83 is in state STARTED
2026-03-16 01:02:48.450822 | orchestrator | 2026-03-16 01:02:48 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:02:51.484144 | orchestrator | 2026-03-16 01:02:51 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED
2026-03-16 01:02:51.484264 | orchestrator | 2026-03-16 01:02:51 | INFO  | Task deb6392b-524f-425c-81ee-bf9c2dab0939 is in state STARTED
2026-03-16 01:02:51.485880 | orchestrator | 2026-03-16 01:02:51 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED
2026-03-16 01:02:51.486362 | orchestrator | 2026-03-16 01:02:51 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED
2026-03-16 01:02:51.488045 | orchestrator | 2026-03-16 01:02:51 | INFO  | Task 4bbd8c49-19b8-4d27-98b3-e5b122f72d83 is in state SUCCESS
2026-03-16 01:02:51.488086 | orchestrator | 2026-03-16 01:02:51 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:02:51.488170 | orchestrator |
2026-03-16 01:02:51.488180 | orchestrator |
2026-03-16 01:02:51.488187 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-03-16 01:02:51.488194 | orchestrator |
2026-03-16 01:02:51.488201 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-03-16 01:02:51.488208 | orchestrator | Monday 16 March 2026 01:00:13 +0000 (0:00:00.206) 0:00:00.206 **********
2026-03-16 01:02:51.488214 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-03-16 01:02:51.488222 | orchestrator |
2026-03-16 01:02:51.488229 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-03-16 01:02:51.488235 | orchestrator | Monday 16 March 2026 01:00:13 +0000 (0:00:00.205) 0:00:00.412 **********
2026-03-16 01:02:51.488241 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-03-16 01:02:51.488265 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-03-16 01:02:51.488274 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-03-16 01:02:51.488280 | orchestrator |
2026-03-16 01:02:51.488287 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-03-16 01:02:51.488293 | orchestrator | Monday 16 March 2026 01:00:14 +0000 (0:00:01.176) 0:00:01.588 **********
2026-03-16 01:02:51.488299 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-03-16 01:02:51.488305 | orchestrator |
2026-03-16 01:02:51.488312 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-03-16 01:02:51.488318 | orchestrator | Monday 16 March 2026 01:00:16 +0000 (0:00:01.251) 0:00:02.840 **********
2026-03-16 01:02:51.488325 | orchestrator | changed: [testbed-manager]
2026-03-16 01:02:51.488332 | orchestrator |
2026-03-16 01:02:51.488348 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-03-16 01:02:51.488360 | orchestrator | Monday 16 March 2026 01:00:17 +0000 (0:00:00.810) 0:00:03.650 **********
2026-03-16 01:02:51.488366 | orchestrator | changed: [testbed-manager]
2026-03-16 01:02:51.488370 | orchestrator |
2026-03-16 01:02:51.488374 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-03-16 01:02:51.488379 | orchestrator | Monday 16 March 2026 01:00:17 +0000 (0:00:00.843) 0:00:04.493 **********
2026-03-16 01:02:51.488386 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-03-16 01:02:51.488392 | orchestrator | ok: [testbed-manager]
2026-03-16 01:02:51.488399 | orchestrator |
2026-03-16 01:02:51.488405 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-03-16 01:02:51.488411 | orchestrator | Monday 16 March 2026 01:00:59 +0000 (0:00:41.897) 0:00:46.391 **********
2026-03-16 01:02:51.488418 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-03-16 01:02:51.488425 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-03-16 01:02:51.488431 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-03-16 01:02:51.488438 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-03-16 01:02:51.488444 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-03-16 01:02:51.488450 | orchestrator |
2026-03-16 01:02:51.488456 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-03-16 01:02:51.488463 | orchestrator | Monday 16 March 2026 01:01:03 +0000 (0:00:03.644) 0:00:50.035 **********
2026-03-16 01:02:51.488469 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-03-16 01:02:51.488476 | orchestrator |
2026-03-16 01:02:51.488482 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-03-16 01:02:51.488489 | orchestrator | Monday 16 March 2026 01:01:03 +0000 (0:00:00.483) 0:00:50.519 **********
2026-03-16 01:02:51.488495 | orchestrator | skipping: [testbed-manager]
2026-03-16 01:02:51.488501 | orchestrator |
2026-03-16 01:02:51.488507 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-03-16 01:02:51.488521 | orchestrator | Monday 16 March 2026 01:01:04 +0000 (0:00:00.135) 0:00:50.654 **********
2026-03-16 01:02:51.488528 | orchestrator | skipping: [testbed-manager]
2026-03-16 01:02:51.488534 | orchestrator |
2026-03-16 01:02:51.488540 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-03-16 01:02:51.488547 | orchestrator | Monday 16 March 2026 01:01:04 +0000 (0:00:00.484) 0:00:51.138 **********
2026-03-16 01:02:51.488551 | orchestrator | changed: [testbed-manager]
2026-03-16 01:02:51.488554 | orchestrator |
2026-03-16 01:02:51.488558 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-03-16 01:02:51.488562 | orchestrator | Monday 16 March 2026 01:01:05 +0000 (0:00:01.406) 0:00:52.546 **********
2026-03-16 01:02:51.488566 | orchestrator | changed: [testbed-manager]
2026-03-16 01:02:51.488570 | orchestrator |
2026-03-16 01:02:51.488573 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-03-16 01:02:51.488582 | orchestrator | Monday 16 March 2026 01:01:06 +0000 (0:00:00.745) 0:00:53.292 **********
2026-03-16 01:02:51.488585 | orchestrator | changed: [testbed-manager]
2026-03-16 01:02:51.488589 | orchestrator |
2026-03-16 01:02:51.488593 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-03-16 01:02:51.488597 | orchestrator | Monday 16 March 2026 01:01:07 +0000 (0:00:00.581) 0:00:53.873 **********
2026-03-16 01:02:51.488601 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-03-16 01:02:51.488604 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-03-16 01:02:51.488609 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-03-16 01:02:51.488612 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-03-16 01:02:51.488616 | orchestrator |
2026-03-16 01:02:51.488620 | orchestrator | PLAY RECAP *********************************************************************
2026-03-16 01:02:51.488624 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-16 01:02:51.488628 | orchestrator |
2026-03-16 01:02:51.488632 | orchestrator |
2026-03-16 01:02:51.488643 | orchestrator | TASKS RECAP ********************************************************************
2026-03-16 01:02:51.488647 | orchestrator | Monday 16 March 2026 01:01:08 +0000 (0:00:01.483) 0:00:55.357 **********
2026-03-16 01:02:51.488651 | orchestrator | ===============================================================================
2026-03-16 01:02:51.488654 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 41.90s
2026-03-16 01:02:51.488658 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.64s
2026-03-16 01:02:51.488662 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.48s
2026-03-16 01:02:51.488666 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.41s
2026-03-16 01:02:51.488669 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.25s
2026-03-16 01:02:51.488673 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.18s
2026-03-16 01:02:51.488715 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.84s
2026-03-16 01:02:51.488718 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.81s
2026-03-16 01:02:51.488722 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.75s
2026-03-16 01:02:51.488726 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.58s
2026-03-16 01:02:51.488729 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.48s
2026-03-16 01:02:51.488733 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.48s
2026-03-16 01:02:51.488737 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.21s
2026-03-16 01:02:51.488741 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s
2026-03-16 01:02:51.488744 | orchestrator |
2026-03-16 01:02:51.488748 | orchestrator |
2026-03-16 01:02:51.488752 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2026-03-16 01:02:51.488755 | orchestrator |
2026-03-16 01:02:51.488759 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2026-03-16 01:02:51.488763 | orchestrator | Monday 16 March 2026 01:00:56 +0000 (0:00:00.097) 0:00:00.097 **********
2026-03-16 01:02:51.488767 | orchestrator | changed: [localhost]
2026-03-16 01:02:51.488771 | orchestrator |
2026-03-16 01:02:51.488774 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2026-03-16 01:02:51.488778 | orchestrator | Monday 16 March 2026 01:00:57 +0000 (0:00:01.020) 0:00:01.118 **********
2026-03-16 01:02:51.488782 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left).
2026-03-16 01:02:51.488786 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (2 retries left).
2026-03-16 01:02:51.488789 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (1 retries left).
2026-03-16 01:02:51.488797 | orchestrator | fatal: [localhost]: FAILED! => {"attempts": 3, "changed": false, "dest": "/share/ironic/ironic/ironic-agent.initramfs", "elapsed": 0, "msg": "Request failed: ", "url": "https://tarballs.opendev.org/openstack/ironic-python-agent/dib/files/ipa-centos9-stable-2024.2.initramfs.sha256"}
2026-03-16 01:02:51.488802 | orchestrator |
2026-03-16 01:02:51.488806 | orchestrator | PLAY RECAP *********************************************************************
2026-03-16 01:02:51.488810 | orchestrator | localhost : ok=1  changed=1  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2026-03-16 01:02:51.488814 | orchestrator |
2026-03-16 01:02:51.488817 | orchestrator |
2026-03-16 01:02:51.488821 | orchestrator | TASKS RECAP ********************************************************************
2026-03-16 01:02:51.488832 | orchestrator | Monday 16 March 2026 01:01:32 +0000 (0:00:35.003) 0:00:36.121 **********
2026-03-16 01:02:51.488836 | orchestrator | ===============================================================================
2026-03-16 01:02:51.488840 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 35.00s
2026-03-16 01:02:51.488844 | orchestrator | Ensure the destination directory exists --------------------------------- 1.02s
2026-03-16 01:02:51.488848 | orchestrator |
2026-03-16 01:02:51.488851 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-16 01:02:51.488855 | orchestrator | 2.16.14
2026-03-16 01:02:51.488859 | orchestrator |
2026-03-16 01:02:51.488863 | orchestrator | PLAY [Bootstraph ceph dashboard] ***********************************************
2026-03-16 01:02:51.488867 | orchestrator |
2026-03-16 01:02:51.488870 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2026-03-16 01:02:51.488874 | orchestrator | Monday 16 March 2026 01:01:12 +0000 (0:00:00.234) 0:00:00.234 **********
2026-03-16 01:02:51.488878 | orchestrator | changed: [testbed-manager]
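The "FAILED - RETRYING: ... (N retries left)" lines above, ending in a fatal result with `"attempts": 3`, show Ansible's `retries`/`until` loop around the ironic-agent download. As a minimal sketch of that retry-then-fail pattern (the function names here are illustrative, not Ansible or OSISM API):

```python
import time


def retry(fn, attempts=3, delay=0.0):
    """Call fn until it succeeds or attempts are exhausted.

    Mirrors the behaviour in the log above: each failure prints a
    "FAILED - RETRYING ... (N retries left)" style message, and the
    last exception is re-raised once no retries remain.
    """
    last_exc = None
    for remaining in range(attempts, 0, -1):
        try:
            return fn()
        except Exception as exc:  # sketch only; real code should narrow this
            last_exc = exc
            print(f"FAILED - RETRYING ({remaining - 1} retries left).")
            time.sleep(delay)
    raise last_exc
```

With a download that fails twice and then succeeds, `retry(download, attempts=3)` returns the result on the third call; with a permanently broken URL, as in the log, it raises after the final attempt.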
2026-03-16 01:02:51.488882 | orchestrator |
2026-03-16 01:02:51.488886 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2026-03-16 01:02:51.488890 | orchestrator | Monday 16 March 2026 01:01:14 +0000 (0:00:01.582) 0:00:01.817 **********
2026-03-16 01:02:51.488893 | orchestrator | changed: [testbed-manager]
2026-03-16 01:02:51.488897 | orchestrator |
2026-03-16 01:02:51.488901 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2026-03-16 01:02:51.488905 | orchestrator | Monday 16 March 2026 01:01:15 +0000 (0:00:00.949) 0:00:02.766 **********
2026-03-16 01:02:51.488909 | orchestrator | changed: [testbed-manager]
2026-03-16 01:02:51.488913 | orchestrator |
2026-03-16 01:02:51.488916 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2026-03-16 01:02:51.488920 | orchestrator | Monday 16 March 2026 01:01:16 +0000 (0:00:00.948) 0:00:03.715 **********
2026-03-16 01:02:51.488924 | orchestrator | changed: [testbed-manager]
2026-03-16 01:02:51.488928 | orchestrator |
2026-03-16 01:02:51.488931 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2026-03-16 01:02:51.488938 | orchestrator | Monday 16 March 2026 01:01:17 +0000 (0:00:01.047) 0:00:04.762 **********
2026-03-16 01:02:51.488942 | orchestrator | changed: [testbed-manager]
2026-03-16 01:02:51.488946 | orchestrator |
2026-03-16 01:02:51.488950 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2026-03-16 01:02:51.488954 | orchestrator | Monday 16 March 2026 01:01:18 +0000 (0:00:00.980) 0:00:05.743 **********
2026-03-16 01:02:51.488957 | orchestrator | changed: [testbed-manager]
2026-03-16 01:02:51.488961 | orchestrator |
2026-03-16 01:02:51.488965 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2026-03-16 01:02:51.488969 | orchestrator | Monday 16 March 2026 01:01:19 +0000 (0:00:00.940) 0:00:06.683 **********
2026-03-16 01:02:51.488973 | orchestrator | changed: [testbed-manager]
2026-03-16 01:02:51.488976 | orchestrator |
2026-03-16 01:02:51.488980 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2026-03-16 01:02:51.488984 | orchestrator | Monday 16 March 2026 01:01:20 +0000 (0:00:01.154) 0:00:07.838 **********
2026-03-16 01:02:51.488990 | orchestrator | changed: [testbed-manager]
2026-03-16 01:02:51.488994 | orchestrator |
2026-03-16 01:02:51.488997 | orchestrator | TASK [Create admin user] *******************************************************
2026-03-16 01:02:51.489001 | orchestrator | Monday 16 March 2026 01:01:21 +0000 (0:00:01.074) 0:00:08.913 **********
2026-03-16 01:02:51.489005 | orchestrator | changed: [testbed-manager]
2026-03-16 01:02:51.489009 | orchestrator |
2026-03-16 01:02:51.489013 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2026-03-16 01:02:51.489016 | orchestrator | Monday 16 March 2026 01:02:26 +0000 (0:01:04.568) 0:01:13.481 **********
2026-03-16 01:02:51.489020 | orchestrator | skipping: [testbed-manager]
2026-03-16 01:02:51.489024 | orchestrator |
2026-03-16 01:02:51.489028 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-03-16 01:02:51.489031 | orchestrator |
2026-03-16 01:02:51.489035 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-03-16 01:02:51.489039 | orchestrator | Monday 16 March 2026 01:02:26 +0000 (0:00:00.194) 0:01:13.676 **********
2026-03-16 01:02:51.489043 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:02:51.489047 | orchestrator |
2026-03-16 01:02:51.489050 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-03-16 01:02:51.489054 | orchestrator |
2026-03-16 01:02:51.489058 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-03-16 01:02:51.489062 | orchestrator | Monday 16 March 2026 01:02:27 +0000 (0:00:01.550) 0:01:15.227 **********
2026-03-16 01:02:51.489066 | orchestrator | changed: [testbed-node-1]
2026-03-16 01:02:51.489069 | orchestrator |
2026-03-16 01:02:51.489073 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-03-16 01:02:51.489077 | orchestrator |
2026-03-16 01:02:51.489081 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-03-16 01:02:51.489084 | orchestrator | Monday 16 March 2026 01:02:39 +0000 (0:00:11.435) 0:01:26.662 **********
2026-03-16 01:02:51.489088 | orchestrator | changed: [testbed-node-2]
2026-03-16 01:02:51.489092 | orchestrator |
2026-03-16 01:02:51.489096 | orchestrator | PLAY RECAP *********************************************************************
2026-03-16 01:02:51.489099 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-16 01:02:51.489105 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-16 01:02:51.489112 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-16 01:02:51.489119 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-16 01:02:51.489127 | orchestrator |
2026-03-16 01:02:51.489133 | orchestrator |
2026-03-16 01:02:51.489140 | orchestrator |
2026-03-16 01:02:51.489150 | orchestrator | TASKS RECAP ********************************************************************
2026-03-16 01:02:51.489158 | orchestrator | Monday 16 March 2026 01:02:50 +0000 (0:00:11.219) 0:01:37.882 **********
2026-03-16 01:02:51.489165 | orchestrator | ===============================================================================
2026-03-16 01:02:51.489172 | orchestrator | Create admin user ------------------------------------------------------ 64.57s
2026-03-16 01:02:51.489177 | orchestrator | Restart ceph manager service ------------------------------------------- 24.21s
2026-03-16 01:02:51.489181 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.58s
2026-03-16 01:02:51.489185 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.15s
2026-03-16 01:02:51.489190 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.07s
2026-03-16 01:02:51.489194 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.05s
2026-03-16 01:02:51.489199 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.98s
2026-03-16 01:02:51.489206 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.95s
2026-03-16 01:02:51.489211 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.95s
2026-03-16 01:02:51.489215 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.94s
2026-03-16 01:02:51.489220 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.20s
2026-03-16 01:02:54.513614 | orchestrator | 2026-03-16 01:02:54 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED
2026-03-16 01:02:54.513736 | orchestrator | 2026-03-16 01:02:54 | INFO  | Task deb6392b-524f-425c-81ee-bf9c2dab0939 is in state STARTED
2026-03-16 01:02:54.514340 | orchestrator | 2026-03-16 01:02:54 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED
2026-03-16 01:02:54.515103 | orchestrator | 2026-03-16 01:02:54 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED
2026-03-16 01:02:54.515144 | orchestrator | 2026-03-16 01:02:54 | INFO  | Wait 1 second(s) until the
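The dashboard play logged above applies five mgr/dashboard settings before enabling the module and creating the admin user. A minimal sketch that renders the equivalent `ceph config set` commands from the logged key/value pairs; the command form is an assumption based on the Ceph CLI, not extracted from this playbook's source:

```python
# Sketch: the mgr/dashboard settings applied by the play above, rendered as
# "ceph config set" commands. Keys and values are taken from the log; the
# CLI invocation itself is an assumption, not from the playbook.
DASHBOARD_SETTINGS = {
    "mgr/dashboard/ssl": "false",
    "mgr/dashboard/server_port": "7000",
    "mgr/dashboard/server_addr": "0.0.0.0",
    "mgr/dashboard/standby_behaviour": "error",
    "mgr/dashboard/standby_error_status_code": "404",
}


def render_commands(settings: dict) -> list:
    """One 'ceph config set mgr <key> <value>' line per logged setting."""
    return [f"ceph config set mgr {key} {value}" for key, value in settings.items()]


for cmd in render_commands(DASHBOARD_SETTINGS):
    print(cmd)
```

The play then enables the module and restarts one mgr per node sequentially, which matches the three single-host "Restart ceph manager services" plays in the recap.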
next check
2026-03-16 01:02:57.542628 | orchestrator | 2026-03-16 01:02:57 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED
2026-03-16 01:02:57.543201 | orchestrator | 2026-03-16 01:02:57 | INFO  | Task deb6392b-524f-425c-81ee-bf9c2dab0939 is in state STARTED
2026-03-16 01:02:57.543899 | orchestrator | 2026-03-16 01:02:57 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED
2026-03-16 01:02:57.544391 | orchestrator | 2026-03-16 01:02:57 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED
2026-03-16 01:02:57.544429 | orchestrator | 2026-03-16 01:02:57 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:03:00.575883 | orchestrator |
2026-03-16 01:03:00.576526 | orchestrator |
2026-03-16 01:03:00.576564 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-16 01:03:00.576573 | orchestrator |
2026-03-16 01:03:00.576581 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-16 01:03:00.576588 | orchestrator | Monday 16 March 2026 01:01:40 +0000 (0:00:00.787) 0:00:00.787 **********
2026-03-16 01:03:00.576595 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:03:00.576602 | orchestrator | ok: [testbed-node-1]
2026-03-16 01:03:00.576608 | orchestrator | ok: [testbed-node-2]
2026-03-16 01:03:00.576613 | orchestrator |
2026-03-16 01:03:00.576620 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-16 01:03:00.576626 | orchestrator | Monday 16 March 2026 01:01:40 +0000 (0:00:00.816) 0:00:01.604 **********
2026-03-16 01:03:00.576633 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-03-16 01:03:00.576640 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-03-16 01:03:00.576647 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-03-16 01:03:00.576665 | orchestrator |
2026-03-16 01:03:00.576672 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-03-16 01:03:00.576679 | orchestrator |
2026-03-16 01:03:00.576686 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-03-16 01:03:00.576693 | orchestrator | Monday 16 March 2026 01:01:41 +0000 (0:00:00.483) 0:00:02.087 **********
2026-03-16 01:03:00.576701 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-16 01:03:00.576709 | orchestrator |
2026-03-16 01:03:00.576715 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2026-03-16 01:03:00.576721 | orchestrator | Monday 16 March 2026 01:01:41 +0000 (0:00:00.470) 0:00:02.558 **********
2026-03-16 01:03:00.576728 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2026-03-16 01:03:00.576734 | orchestrator |
2026-03-16 01:03:00.576741 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2026-03-16 01:03:00.576764 | orchestrator | Monday 16 March 2026 01:01:45 +0000 (0:00:03.932) 0:00:06.490 **********
2026-03-16 01:03:00.576771 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2026-03-16 01:03:00.576778 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2026-03-16 01:03:00.576784 | orchestrator |
2026-03-16 01:03:00.576791 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2026-03-16 01:03:00.576798 | orchestrator | Monday 16 March 2026 01:01:52 +0000 (0:00:06.844) 0:00:13.335 **********
2026-03-16 01:03:00.576813 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-16 01:03:00.576820 | orchestrator |
2026-03-16 01:03:00.576826 | orchestrator | TASK [service-ks-register : placement |
Creating users] ************************ 2026-03-16 01:03:00.576833 | orchestrator | Monday 16 March 2026 01:01:56 +0000 (0:00:03.671) 0:00:17.007 ********** 2026-03-16 01:03:00.576839 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-16 01:03:00.576845 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-03-16 01:03:00.576851 | orchestrator | 2026-03-16 01:03:00.576857 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-03-16 01:03:00.576864 | orchestrator | Monday 16 March 2026 01:02:00 +0000 (0:00:03.701) 0:00:20.708 ********** 2026-03-16 01:03:00.576870 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-16 01:03:00.576876 | orchestrator | 2026-03-16 01:03:00.576883 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-03-16 01:03:00.576889 | orchestrator | Monday 16 March 2026 01:02:03 +0000 (0:00:03.582) 0:00:24.290 ********** 2026-03-16 01:03:00.576895 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-03-16 01:03:00.576902 | orchestrator | 2026-03-16 01:03:00.576908 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-16 01:03:00.576915 | orchestrator | Monday 16 March 2026 01:02:08 +0000 (0:00:04.420) 0:00:28.711 ********** 2026-03-16 01:03:00.576922 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:03:00.576928 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:03:00.576935 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:03:00.576942 | orchestrator | 2026-03-16 01:03:00.576948 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-03-16 01:03:00.576954 | orchestrator | Monday 16 March 2026 01:02:08 +0000 (0:00:00.524) 0:00:29.235 ********** 2026-03-16 01:03:00.576964 | orchestrator | changed: [testbed-node-0] => (item={'key': 
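The "Creating endpoints" task above registers one internal and one public endpoint for the placement service, both on port 8780. A hypothetical sketch of the equivalent registration commands, with URLs taken from the log; the CLI form and the region name are assumptions (the log does not show a region):

```python
# Sketch: compose the two placement endpoint registrations logged above.
# URLs come from the log; "openstack endpoint create" syntax and the
# "RegionOne" region name are assumptions for illustration.
ENDPOINTS = {
    "internal": "https://api-int.testbed.osism.xyz:8780",
    "public": "https://api.testbed.osism.xyz:8780",
}


def endpoint_create_commands(service: str, endpoints: dict) -> list:
    """Render one endpoint-create call per interface/URL pair."""
    return [
        f"openstack endpoint create --region RegionOne {service} {iface} {url}"
        for iface, url in endpoints.items()
    ]


for cmd in endpoint_create_commands("placement", ENDPOINTS):
    print(cmd)
```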
'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-16 01:03:00.576992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-16 01:03:00.577006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-16 01:03:00.577013 | orchestrator | 2026-03-16 01:03:00.577019 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-03-16 01:03:00.577030 | orchestrator | Monday 16 March 2026 01:02:10 +0000 (0:00:01.962) 0:00:31.197 ********** 2026-03-16 01:03:00.577037 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:03:00.577043 | orchestrator | 2026-03-16 01:03:00.577050 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-03-16 01:03:00.577057 | orchestrator | Monday 16 March 2026 01:02:10 +0000 (0:00:00.331) 0:00:31.528 ********** 2026-03-16 01:03:00.577064 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:03:00.577070 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:03:00.577077 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:03:00.577083 | orchestrator | 2026-03-16 01:03:00.577089 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-16 01:03:00.577095 | orchestrator | Monday 16 March 2026 01:02:11 +0000 (0:00:00.935) 0:00:32.464 ********** 2026-03-16 01:03:00.577102 | orchestrator | included: 
/ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 01:03:00.577108 | orchestrator | 2026-03-16 01:03:00.577114 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-03-16 01:03:00.577120 | orchestrator | Monday 16 March 2026 01:02:12 +0000 (0:00:01.081) 0:00:33.545 ********** 2026-03-16 01:03:00.577127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-16 01:03:00.577141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-16 01:03:00.577154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-16 01:03:00.577161 | orchestrator | 2026-03-16 01:03:00.577168 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-03-16 01:03:00.577175 | orchestrator | Monday 16 March 2026 01:02:14 +0000 (0:00:01.862) 0:00:35.408 ********** 2026-03-16 01:03:00.577183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-16 01:03:00.577190 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:03:00.577196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-16 01:03:00.577202 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:03:00.577214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-16 01:03:00.577226 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:03:00.577232 | orchestrator | 2026-03-16 01:03:00.577239 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-03-16 01:03:00.577246 | orchestrator | Monday 16 March 2026 01:02:15 +0000 (0:00:01.212) 0:00:36.621 ********** 2026-03-16 01:03:00.577253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-16 01:03:00.577259 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:03:00.577269 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-16 01:03:00.577276 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:03:00.577283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-16 01:03:00.577290 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:03:00.577296 | 
orchestrator | 2026-03-16 01:03:00.577302 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-03-16 01:03:00.577309 | orchestrator | Monday 16 March 2026 01:02:16 +0000 (0:00:00.956) 0:00:37.577 ********** 2026-03-16 01:03:00.577326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-16 01:03:00.577333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-16 01:03:00.577360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-16 01:03:00.577370 | orchestrator | 2026-03-16 01:03:00.577377 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-03-16 01:03:00.577384 | orchestrator | Monday 16 March 2026 01:02:18 +0000 (0:00:01.908) 0:00:39.486 ********** 2026-03-16 01:03:00.577391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 
'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-16 01:03:00.577402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-16 01:03:00.577414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-16 01:03:00.577422 | orchestrator | 2026-03-16 01:03:00.577429 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-03-16 01:03:00.577435 | orchestrator | Monday 16 March 2026 01:02:22 +0000 (0:00:03.654) 0:00:43.141 ********** 2026-03-16 01:03:00.577441 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-16 01:03:00.577449 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-16 01:03:00.577456 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-16 01:03:00.577463 | orchestrator | 2026-03-16 01:03:00.577470 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-03-16 01:03:00.577477 | orchestrator | Monday 16 March 2026 01:02:24 +0000 (0:00:01.603) 0:00:44.744 ********** 2026-03-16 01:03:00.577484 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:03:00.577491 | orchestrator | changed: [testbed-node-1] 2026-03-16 01:03:00.577497 | orchestrator | changed: [testbed-node-2] 2026-03-16 01:03:00.577503 | orchestrator | 2026-03-16 01:03:00.577510 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-03-16 01:03:00.577517 | orchestrator | Monday 16 March 2026 01:02:25 +0000 (0:00:01.506) 0:00:46.251 ********** 2026-03-16 01:03:00.577527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 
'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-16 01:03:00.577538 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:03:00.577545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-16 01:03:00.577552 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:03:00.577564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-16 01:03:00.577572 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:03:00.577579 | orchestrator | 2026-03-16 01:03:00.577586 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-03-16 01:03:00.577593 | orchestrator | Monday 16 March 2026 01:02:26 +0000 (0:00:01.052) 0:00:47.303 ********** 2026-03-16 01:03:00.577601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-16 01:03:00.577615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-16 01:03:00.577626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-16 
01:03:00.577633 | orchestrator | 2026-03-16 01:03:00.577640 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-03-16 01:03:00.577648 | orchestrator | Monday 16 March 2026 01:02:28 +0000 (0:00:01.491) 0:00:48.796 ********** 2026-03-16 01:03:00.577672 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:03:00.577678 | orchestrator | 2026-03-16 01:03:00.577685 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-03-16 01:03:00.577691 | orchestrator | Monday 16 March 2026 01:02:31 +0000 (0:00:03.730) 0:00:52.526 ********** 2026-03-16 01:03:00.577698 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:03:00.577705 | orchestrator | 2026-03-16 01:03:00.577711 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-03-16 01:03:00.577718 | orchestrator | Monday 16 March 2026 01:02:34 +0000 (0:00:02.938) 0:00:55.464 ********** 2026-03-16 01:03:00.577731 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:03:00.577738 | orchestrator | 2026-03-16 01:03:00.577744 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-16 01:03:00.577751 | orchestrator | Monday 16 March 2026 01:02:49 +0000 (0:00:14.356) 0:01:09.820 ********** 2026-03-16 01:03:00.577758 | orchestrator | 2026-03-16 01:03:00.577765 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-16 01:03:00.577823 | orchestrator | Monday 16 March 2026 01:02:49 +0000 (0:00:00.067) 0:01:09.888 ********** 2026-03-16 01:03:00.577834 | orchestrator | 2026-03-16 01:03:00.577841 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-16 01:03:00.577848 | orchestrator | Monday 16 March 2026 01:02:49 +0000 (0:00:00.067) 0:01:09.955 ********** 2026-03-16 01:03:00.577854 | orchestrator | 2026-03-16 01:03:00.577861 | 
orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-03-16 01:03:00.577868 | orchestrator | Monday 16 March 2026 01:02:49 +0000 (0:00:00.067) 0:01:10.023 ********** 2026-03-16 01:03:00.577874 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:03:00.577881 | orchestrator | changed: [testbed-node-1] 2026-03-16 01:03:00.577888 | orchestrator | changed: [testbed-node-2] 2026-03-16 01:03:00.577895 | orchestrator | 2026-03-16 01:03:00.577902 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-16 01:03:00.577910 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-16 01:03:00.577918 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-16 01:03:00.577926 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-16 01:03:00.577934 | orchestrator | 2026-03-16 01:03:00.577947 | orchestrator | 2026-03-16 01:03:00.577954 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-16 01:03:00.577960 | orchestrator | Monday 16 March 2026 01:02:59 +0000 (0:00:10.156) 0:01:20.179 ********** 2026-03-16 01:03:00.577966 | orchestrator | =============================================================================== 2026-03-16 01:03:00.577973 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.36s 2026-03-16 01:03:00.577980 | orchestrator | placement : Restart placement-api container ---------------------------- 10.16s 2026-03-16 01:03:00.577986 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.84s 2026-03-16 01:03:00.577997 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.42s 2026-03-16 01:03:00.578004 | orchestrator | 
service-ks-register : placement | Creating services --------------------- 3.93s 2026-03-16 01:03:00.578010 | orchestrator | placement : Creating placement databases -------------------------------- 3.73s 2026-03-16 01:03:00.578081 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.70s 2026-03-16 01:03:00.578090 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.67s 2026-03-16 01:03:00.578097 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.65s 2026-03-16 01:03:00.578104 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.58s 2026-03-16 01:03:00.578110 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.94s 2026-03-16 01:03:00.578117 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.96s 2026-03-16 01:03:00.578123 | orchestrator | placement : Copying over config.json files for services ----------------- 1.91s 2026-03-16 01:03:00.578131 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.86s 2026-03-16 01:03:00.578138 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.60s 2026-03-16 01:03:00.578144 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.51s 2026-03-16 01:03:00.578150 | orchestrator | placement : Check placement containers ---------------------------------- 1.49s 2026-03-16 01:03:00.578171 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 1.21s 2026-03-16 01:03:00.578178 | orchestrator | placement : include_tasks ----------------------------------------------- 1.08s 2026-03-16 01:03:00.578184 | orchestrator | placement : Copying over existing policy file --------------------------- 1.05s 2026-03-16 01:03:00.578191 | orchestrator | 
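
The TASKS RECAP above (from Ansible's `profile_tasks` callback) lists each task with its wall-clock duration. A minimal sketch, assuming plain Python and the line format shown in this log (the helper name is illustrative, not part of Ansible), of turning one recap line into a `(task, seconds)` pair:

```python
import re

# Matches profile_tasks recap lines like:
#   "placement : Running placement bootstrap container ------- 14.36s"
# i.e. a task name, a run of dashes as filler, and a duration in seconds.
RECAP_RE = re.compile(r"^(?P<task>.+?)\s-{2,}\s(?P<secs>\d+\.\d+)s$")

def parse_recap_line(line):
    """Return (task_name, seconds) for a recap line, or None if it
    does not match the profile_tasks timing format."""
    m = RECAP_RE.match(line.strip())
    if not m:
        return None
    return m.group("task"), float(m.group("secs"))

line = ("placement : Running placement bootstrap container "
        "---------------------- 14.36s")
print(parse_recap_line(line))
# -> ('placement : Running placement bootstrap container', 14.36)
```

Summing such pairs across the recap reproduces the play's total elapsed time, which is how the slowest tasks (bootstrap at 14.36s, the handler restart at 10.16s) stand out here.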
2026-03-16 01:03:00 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED 2026-03-16 01:03:00.578198 | orchestrator | 2026-03-16 01:03:00 | INFO  | Task deb6392b-524f-425c-81ee-bf9c2dab0939 is in state SUCCESS 2026-03-16 01:03:00.578205 | orchestrator | 2026-03-16 01:03:00 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state STARTED 2026-03-16 01:03:00.578211 | orchestrator | 2026-03-16 01:03:00 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED 2026-03-16 01:03:00.578218 | orchestrator | 2026-03-16 01:03:00 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:03:03.614429 | orchestrator | 2026-03-16 01:03:03 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED 2026-03-16 01:03:03.617443 | orchestrator | 2026-03-16 01:03:03 | INFO  | Task 9a6de2c1-0daa-4690-beb5-321883059544 is in state STARTED 2026-03-16 01:03:03.621299 | orchestrator | 2026-03-16 01:03:03 | INFO  | Task 6ec49161-1e2c-41c9-a594-91e2dce49c6c is in state SUCCESS 2026-03-16 01:03:03.623550 | orchestrator | 2026-03-16 01:03:03.623609 | orchestrator | 2026-03-16 01:03:03.623617 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-16 01:03:03.623623 | orchestrator | 2026-03-16 01:03:03.623630 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-16 01:03:03.623634 | orchestrator | Monday 16 March 2026 01:00:56 +0000 (0:00:00.291) 0:00:00.291 ********** 2026-03-16 01:03:03.623663 | orchestrator | ok: [testbed-node-0] 2026-03-16 01:03:03.623668 | orchestrator | ok: [testbed-node-1] 2026-03-16 01:03:03.623671 | orchestrator | ok: [testbed-node-2] 2026-03-16 01:03:03.623675 | orchestrator | 2026-03-16 01:03:03.623678 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-16 01:03:03.623681 | orchestrator | Monday 16 March 2026 01:00:57 +0000 (0:00:00.397) 0:00:00.688 ********** 
2026-03-16 01:03:03.623684 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-03-16 01:03:03.623688 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-03-16 01:03:03.623691 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-03-16 01:03:03.623694 | orchestrator | 2026-03-16 01:03:03.623697 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-03-16 01:03:03.623700 | orchestrator | 2026-03-16 01:03:03.623703 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-16 01:03:03.623707 | orchestrator | Monday 16 March 2026 01:00:57 +0000 (0:00:00.508) 0:00:01.196 ********** 2026-03-16 01:03:03.623710 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 01:03:03.623715 | orchestrator | 2026-03-16 01:03:03.623718 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-03-16 01:03:03.623721 | orchestrator | Monday 16 March 2026 01:00:58 +0000 (0:00:00.462) 0:00:01.659 ********** 2026-03-16 01:03:03.623725 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-03-16 01:03:03.623728 | orchestrator | 2026-03-16 01:03:03.623731 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-03-16 01:03:03.623734 | orchestrator | Monday 16 March 2026 01:01:02 +0000 (0:00:04.469) 0:00:06.129 ********** 2026-03-16 01:03:03.623737 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-03-16 01:03:03.623741 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-03-16 01:03:03.623744 | orchestrator | 2026-03-16 01:03:03.623747 | orchestrator | TASK [service-ks-register : barbican | Creating projects] 
********************** 2026-03-16 01:03:03.623757 | orchestrator | Monday 16 March 2026 01:01:09 +0000 (0:00:07.108) 0:00:13.237 ********** 2026-03-16 01:03:03.623761 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-03-16 01:03:03.623764 | orchestrator | 2026-03-16 01:03:03.623767 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-03-16 01:03:03.623827 | orchestrator | Monday 16 March 2026 01:01:13 +0000 (0:00:03.890) 0:00:17.128 ********** 2026-03-16 01:03:03.623832 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-16 01:03:03.623835 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-03-16 01:03:03.623838 | orchestrator | 2026-03-16 01:03:03.623842 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-03-16 01:03:03.623845 | orchestrator | Monday 16 March 2026 01:01:17 +0000 (0:00:04.020) 0:00:21.148 ********** 2026-03-16 01:03:03.623848 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-16 01:03:03.623851 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-03-16 01:03:03.623854 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-03-16 01:03:03.623858 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-03-16 01:03:03.623861 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-03-16 01:03:03.623864 | orchestrator | 2026-03-16 01:03:03.623867 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-03-16 01:03:03.623870 | orchestrator | Monday 16 March 2026 01:01:35 +0000 (0:00:17.752) 0:00:38.900 ********** 2026-03-16 01:03:03.623873 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-03-16 01:03:03.623876 | orchestrator | 2026-03-16 01:03:03.623879 | orchestrator | TASK [barbican : Ensuring config directories exist] 
**************************** 2026-03-16 01:03:03.623886 | orchestrator | Monday 16 March 2026 01:01:39 +0000 (0:00:03.756) 0:00:42.657 ********** 2026-03-16 01:03:03.623896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-16 01:03:03.623910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-16 01:03:03.623914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-16 01:03:03.623920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:03.623925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:03.623931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:03.623938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:03.623942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:03.623945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:03.623949 | orchestrator | 2026-03-16 01:03:03.623952 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-03-16 01:03:03.623955 | orchestrator | Monday 16 March 2026 01:01:42 +0000 (0:00:02.701) 0:00:45.359 ********** 2026-03-16 01:03:03.623958 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-03-16 01:03:03.623961 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-03-16 01:03:03.623966 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-03-16 01:03:03.623969 | orchestrator | 2026-03-16 01:03:03.623972 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-03-16 01:03:03.623976 | orchestrator | Monday 16 March 2026 01:01:43 +0000 (0:00:01.000) 0:00:46.359 ********** 2026-03-16 01:03:03.623979 | 
orchestrator | skipping: [testbed-node-0] 2026-03-16 01:03:03.623982 | orchestrator | 2026-03-16 01:03:03.623985 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-03-16 01:03:03.623988 | orchestrator | Monday 16 March 2026 01:01:43 +0000 (0:00:00.104) 0:00:46.464 ********** 2026-03-16 01:03:03.623992 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:03:03.623997 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:03:03.624001 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:03:03.624004 | orchestrator | 2026-03-16 01:03:03.624007 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-16 01:03:03.624010 | orchestrator | Monday 16 March 2026 01:01:43 +0000 (0:00:00.368) 0:00:46.832 ********** 2026-03-16 01:03:03.624013 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 01:03:03.624016 | orchestrator | 2026-03-16 01:03:03.624019 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-03-16 01:03:03.624023 | orchestrator | Monday 16 March 2026 01:01:43 +0000 (0:00:00.504) 0:00:47.337 ********** 2026-03-16 01:03:03.624026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-16 01:03:03.624032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-16 01:03:03.624036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-16 01:03:03.624042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:03.624048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:03.624051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:03.624057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:03.624060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:03.624063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:03.624067 | orchestrator | 2026-03-16 01:03:03.624070 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-03-16 01:03:03.624073 | orchestrator | Monday 16 March 2026 01:01:48 +0000 (0:00:04.015) 0:00:51.352 ********** 2026-03-16 01:03:03.624078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-16 01:03:03.624084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-16 01:03:03.624087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-16 01:03:03.624090 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:03:03.624096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-16 01:03:03.624100 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-16 01:03:03.624105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-16 01:03:03.624110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-16 01:03:03.624113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-16 01:03:03.624117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-16 01:03:03.624120 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:03:03.624123 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:03:03.624127 | orchestrator | 2026-03-16 01:03:03.624133 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-03-16 01:03:03.624137 | orchestrator | 
Monday 16 March 2026 01:01:49 +0000 (0:00:01.774) 0:00:53.127 ********** 2026-03-16 01:03:03.624141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-16 01:03:03.624145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-16 01:03:03.624155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-16 01:03:03.624159 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:03:03.624163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-16 01:03:03.624167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-16 01:03:03.624174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-16 01:03:03.624178 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:03:03.624182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-16 01:03:03.624190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-16 01:03:03.624194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-16 01:03:03.624197 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:03:03.624201 | orchestrator | 2026-03-16 01:03:03.624205 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-03-16 01:03:03.624209 | orchestrator | Monday 16 March 2026 01:01:51 +0000 (0:00:01.462) 0:00:54.590 ********** 2026-03-16 01:03:03.624213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-16 01:03:03.624329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-16 01:03:03.624337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-16 01:03:03.624343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:03.624347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:03.624351 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:03.624357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:03.624361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:03.624367 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:03.624371 | orchestrator | 2026-03-16 01:03:03.624375 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-03-16 01:03:03.624379 | orchestrator | Monday 16 March 2026 01:01:55 +0000 (0:00:04.693) 0:00:59.283 ********** 2026-03-16 01:03:03.624383 | orchestrator | changed: [testbed-node-1] 2026-03-16 01:03:03.624387 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:03:03.624391 | orchestrator | changed: [testbed-node-2] 2026-03-16 01:03:03.624394 | orchestrator | 2026-03-16 01:03:03.624397 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-03-16 01:03:03.624405 | orchestrator | Monday 16 March 2026 01:01:58 +0000 (0:00:02.312) 0:01:01.595 ********** 2026-03-16 01:03:03.624410 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-16 01:03:03.624415 | orchestrator | 2026-03-16 01:03:03.624420 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-03-16 01:03:03.624426 | orchestrator | Monday 16 March 2026 01:01:59 +0000 (0:00:01.542) 0:01:03.138 ********** 2026-03-16 01:03:03.624432 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:03:03.624437 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:03:03.624442 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:03:03.624447 | orchestrator | 2026-03-16 01:03:03.624452 | orchestrator | TASK 
[barbican : Copying over barbican.conf] *********************************** 2026-03-16 01:03:03.624458 | orchestrator | Monday 16 March 2026 01:02:00 +0000 (0:00:00.896) 0:01:04.034 ********** 2026-03-16 01:03:03.624463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-16 01:03:03.624472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-16 01:03:03.624481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-16 01:03:03.624487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:03.624494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:03.624500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:03.624506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:03.624514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 
'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:03.624523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:03.624529 | orchestrator | 2026-03-16 01:03:03.624534 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-03-16 01:03:03.624539 | orchestrator | Monday 16 March 2026 01:02:12 +0000 (0:00:11.539) 0:01:15.574 ********** 2026-03-16 01:03:03.624547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-16 01:03:03.624551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-16 01:03:03.624554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-16 01:03:03.624558 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:03:03.624564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 
'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-16 01:03:03.624575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-16 01:03:03.624580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-16 01:03:03.624586 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:03:03.624594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-16 01:03:03.624599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-16 01:03:03.624605 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-16 01:03:03.624613 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:03:03.624619 | orchestrator | 2026-03-16 01:03:03.624623 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-03-16 01:03:03.624626 | orchestrator | Monday 16 March 2026 01:02:13 +0000 (0:00:00.837) 0:01:16.411 ********** 2026-03-16 01:03:03.624632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-16 01:03:03.624636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 
'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-16 01:03:03.624641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-16 01:03:03.624657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:03.624665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:03.624671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:03.624675 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:03.624678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:03.624683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:03.624687 | orchestrator | 2026-03-16 01:03:03.624690 | orchestrator | TASK [barbican : include_tasks] 
************************************************ 2026-03-16 01:03:03.624693 | orchestrator | Monday 16 March 2026 01:02:16 +0000 (0:00:03.309) 0:01:19.721 ********** 2026-03-16 01:03:03.624696 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:03:03.624699 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:03:03.624703 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:03:03.624706 | orchestrator | 2026-03-16 01:03:03.624709 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-03-16 01:03:03.624712 | orchestrator | Monday 16 March 2026 01:02:16 +0000 (0:00:00.347) 0:01:20.068 ********** 2026-03-16 01:03:03.624715 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:03:03.624718 | orchestrator | 2026-03-16 01:03:03.624721 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-03-16 01:03:03.624729 | orchestrator | Monday 16 March 2026 01:02:19 +0000 (0:00:02.476) 0:01:22.545 ********** 2026-03-16 01:03:03.624732 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:03:03.624736 | orchestrator | 2026-03-16 01:03:03.624739 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-03-16 01:03:03.624742 | orchestrator | Monday 16 March 2026 01:02:22 +0000 (0:00:02.961) 0:01:25.506 ********** 2026-03-16 01:03:03.624745 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:03:03.624748 | orchestrator | 2026-03-16 01:03:03.624751 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-16 01:03:03.624755 | orchestrator | Monday 16 March 2026 01:02:34 +0000 (0:00:11.960) 0:01:37.467 ********** 2026-03-16 01:03:03.624758 | orchestrator | 2026-03-16 01:03:03.624761 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-16 01:03:03.624764 | orchestrator | Monday 16 March 2026 01:02:34 +0000 (0:00:00.072) 
0:01:37.539 ********** 2026-03-16 01:03:03.624767 | orchestrator | 2026-03-16 01:03:03.624770 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-16 01:03:03.624773 | orchestrator | Monday 16 March 2026 01:02:34 +0000 (0:00:00.059) 0:01:37.599 ********** 2026-03-16 01:03:03.624777 | orchestrator | 2026-03-16 01:03:03.624780 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-03-16 01:03:03.624783 | orchestrator | Monday 16 March 2026 01:02:34 +0000 (0:00:00.067) 0:01:37.666 ********** 2026-03-16 01:03:03.624786 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:03:03.624789 | orchestrator | changed: [testbed-node-1] 2026-03-16 01:03:03.624792 | orchestrator | changed: [testbed-node-2] 2026-03-16 01:03:03.624795 | orchestrator | 2026-03-16 01:03:03.624798 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-03-16 01:03:03.624802 | orchestrator | Monday 16 March 2026 01:02:46 +0000 (0:00:12.168) 0:01:49.835 ********** 2026-03-16 01:03:03.624805 | orchestrator | changed: [testbed-node-2] 2026-03-16 01:03:03.624808 | orchestrator | changed: [testbed-node-1] 2026-03-16 01:03:03.624813 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:03:03.624816 | orchestrator | 2026-03-16 01:03:03.624819 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-03-16 01:03:03.624823 | orchestrator | Monday 16 March 2026 01:02:55 +0000 (0:00:08.632) 0:01:58.467 ********** 2026-03-16 01:03:03.624826 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:03:03.624829 | orchestrator | changed: [testbed-node-2] 2026-03-16 01:03:03.624832 | orchestrator | changed: [testbed-node-1] 2026-03-16 01:03:03.624835 | orchestrator | 2026-03-16 01:03:03.624838 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-16 01:03:03.624842 | 
orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-16 01:03:03.624846 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-16 01:03:03.624849 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-16 01:03:03.624853 | orchestrator | 2026-03-16 01:03:03.624856 | orchestrator | 2026-03-16 01:03:03.624859 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-16 01:03:03.624862 | orchestrator | Monday 16 March 2026 01:03:02 +0000 (0:00:06.910) 0:02:05.378 ********** 2026-03-16 01:03:03.624865 | orchestrator | =============================================================================== 2026-03-16 01:03:03.624868 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 17.75s 2026-03-16 01:03:03.624872 | orchestrator | barbican : Restart barbican-api container ------------------------------ 12.17s 2026-03-16 01:03:03.624875 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.96s 2026-03-16 01:03:03.624880 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 11.54s 2026-03-16 01:03:03.624883 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 8.63s 2026-03-16 01:03:03.624886 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.11s 2026-03-16 01:03:03.624890 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 6.91s 2026-03-16 01:03:03.624893 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.69s 2026-03-16 01:03:03.624897 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 4.47s 2026-03-16 01:03:03.624900 | orchestrator | 
service-ks-register : barbican | Creating users ------------------------- 4.02s 2026-03-16 01:03:03.624904 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.02s 2026-03-16 01:03:03.624907 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.89s 2026-03-16 01:03:03.624910 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.76s 2026-03-16 01:03:03.624913 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.31s 2026-03-16 01:03:03.624916 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.96s 2026-03-16 01:03:03.624919 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.70s 2026-03-16 01:03:03.624922 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.48s 2026-03-16 01:03:03.624926 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.31s 2026-03-16 01:03:03.624929 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 1.77s 2026-03-16 01:03:03.624932 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.54s 2026-03-16 01:03:03.624935 | orchestrator | 2026-03-16 01:03:03 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED 2026-03-16 01:03:03.626654 | orchestrator | 2026-03-16 01:03:03 | INFO  | Task 03c72f82-fbc0-440e-a92b-b734bfffbb8e is in state STARTED 2026-03-16 01:03:03.626701 | orchestrator | 2026-03-16 01:03:03 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:03:06.648167 | orchestrator | 2026-03-16 01:03:06 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED 2026-03-16 01:03:06.650411 | orchestrator | 2026-03-16 01:03:06 | INFO  | Task 9a6de2c1-0daa-4690-beb5-321883059544 is in state STARTED 2026-03-16 
01:03:06.650879 | orchestrator | 2026-03-16 01:03:06 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED 2026-03-16 01:03:06.651468 | orchestrator | 2026-03-16 01:03:06 | INFO  | Task 03c72f82-fbc0-440e-a92b-b734bfffbb8e is in state STARTED 2026-03-16 01:03:06.651484 | orchestrator | 2026-03-16 01:03:06 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:03:09.673988 | orchestrator | 2026-03-16 01:03:09 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED 2026-03-16 01:03:09.674507 | orchestrator | 2026-03-16 01:03:09 | INFO  | Task 9a6de2c1-0daa-4690-beb5-321883059544 is in state STARTED 2026-03-16 01:03:09.675351 | orchestrator | 2026-03-16 01:03:09 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED 2026-03-16 01:03:09.675803 | orchestrator | 2026-03-16 01:03:09 | INFO  | Task 03c72f82-fbc0-440e-a92b-b734bfffbb8e is in state STARTED 2026-03-16 01:03:09.675818 | orchestrator | 2026-03-16 01:03:09 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:03:12.693222 | orchestrator | 2026-03-16 01:03:12 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED 2026-03-16 01:03:12.693510 | orchestrator | 2026-03-16 01:03:12 | INFO  | Task 9a6de2c1-0daa-4690-beb5-321883059544 is in state STARTED 2026-03-16 01:03:12.695498 | orchestrator | 2026-03-16 01:03:12 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED 2026-03-16 01:03:12.696138 | orchestrator | 2026-03-16 01:03:12 | INFO  | Task 03c72f82-fbc0-440e-a92b-b734bfffbb8e is in state SUCCESS 2026-03-16 01:03:12.696167 | orchestrator | 2026-03-16 01:03:12 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:03:15.724799 | orchestrator | 2026-03-16 01:03:15 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED 2026-03-16 01:03:15.724976 | orchestrator | 2026-03-16 01:03:15 | INFO  | Task 9a6de2c1-0daa-4690-beb5-321883059544 is in state STARTED 2026-03-16 01:03:15.725479 | orchestrator 
| 2026-03-16 01:03:15 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED 2026-03-16 01:03:15.726184 | orchestrator | 2026-03-16 01:03:15 | INFO  | Task 1e84e9e3-46df-4ec1-8aca-a4890ebbd638 is in state STARTED 2026-03-16 01:03:15.726203 | orchestrator | 2026-03-16 01:03:15 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:03:18.761710 | orchestrator | 2026-03-16 01:03:18 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED 2026-03-16 01:03:18.762457 | orchestrator | 2026-03-16 01:03:18 | INFO  | Task 9a6de2c1-0daa-4690-beb5-321883059544 is in state STARTED 2026-03-16 01:03:18.766906 | orchestrator | 2026-03-16 01:03:18 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED 2026-03-16 01:03:18.769243 | orchestrator | 2026-03-16 01:03:18 | INFO  | Task 1e84e9e3-46df-4ec1-8aca-a4890ebbd638 is in state STARTED 2026-03-16 01:03:18.769779 | orchestrator | 2026-03-16 01:03:18 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:03:21.797853 | orchestrator | 2026-03-16 01:03:21 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED 2026-03-16 01:03:21.798108 | orchestrator | 2026-03-16 01:03:21 | INFO  | Task 9a6de2c1-0daa-4690-beb5-321883059544 is in state STARTED 2026-03-16 01:03:21.799564 | orchestrator | 2026-03-16 01:03:21 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED 2026-03-16 01:03:21.800297 | orchestrator | 2026-03-16 01:03:21 | INFO  | Task 1e84e9e3-46df-4ec1-8aca-a4890ebbd638 is in state STARTED 2026-03-16 01:03:21.800996 | orchestrator | 2026-03-16 01:03:21 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:03:24.835207 | orchestrator | 2026-03-16 01:03:24 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED 2026-03-16 01:03:24.836250 | orchestrator | 2026-03-16 01:03:24 | INFO  | Task 9a6de2c1-0daa-4690-beb5-321883059544 is in state STARTED 2026-03-16 01:03:24.837076 | orchestrator | 2026-03-16 01:03:24 | INFO  | 
Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED 2026-03-16 01:03:24.839732 | orchestrator | 2026-03-16 01:03:24 | INFO  | Task 1e84e9e3-46df-4ec1-8aca-a4890ebbd638 is in state STARTED 2026-03-16 01:03:24.839777 | orchestrator | 2026-03-16 01:03:24 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:03:27.906364 | orchestrator | 2026-03-16 01:03:27 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED 2026-03-16 01:03:27.907169 | orchestrator | 2026-03-16 01:03:27 | INFO  | Task 9a6de2c1-0daa-4690-beb5-321883059544 is in state STARTED 2026-03-16 01:03:27.908131 | orchestrator | 2026-03-16 01:03:27 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED 2026-03-16 01:03:27.908656 | orchestrator | 2026-03-16 01:03:27 | INFO  | Task 1e84e9e3-46df-4ec1-8aca-a4890ebbd638 is in state STARTED 2026-03-16 01:03:27.908714 | orchestrator | 2026-03-16 01:03:27 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:03:30.954733 | orchestrator | 2026-03-16 01:03:30 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED 2026-03-16 01:03:30.955772 | orchestrator | 2026-03-16 01:03:30 | INFO  | Task 9a6de2c1-0daa-4690-beb5-321883059544 is in state STARTED 2026-03-16 01:03:30.958111 | orchestrator | 2026-03-16 01:03:30 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED 2026-03-16 01:03:30.958927 | orchestrator | 2026-03-16 01:03:30 | INFO  | Task 1e84e9e3-46df-4ec1-8aca-a4890ebbd638 is in state STARTED 2026-03-16 01:03:30.958964 | orchestrator | 2026-03-16 01:03:30 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:03:33.987184 | orchestrator | 2026-03-16 01:03:33 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED 2026-03-16 01:03:33.987568 | orchestrator | 2026-03-16 01:03:33 | INFO  | Task 9a6de2c1-0daa-4690-beb5-321883059544 is in state STARTED 2026-03-16 01:03:33.989571 | orchestrator | 2026-03-16 01:03:33 | INFO  | Task 
54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED 2026-03-16 01:03:33.990133 | orchestrator | 2026-03-16 01:03:33 | INFO  | Task 1e84e9e3-46df-4ec1-8aca-a4890ebbd638 is in state STARTED 2026-03-16 01:03:33.990182 | orchestrator | 2026-03-16 01:03:33 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:03:37.039470 | orchestrator | 2026-03-16 01:03:37 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED 2026-03-16 01:03:37.040788 | orchestrator | 2026-03-16 01:03:37 | INFO  | Task 9a6de2c1-0daa-4690-beb5-321883059544 is in state STARTED 2026-03-16 01:03:37.043546 | orchestrator | 2026-03-16 01:03:37 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED 2026-03-16 01:03:37.044591 | orchestrator | 2026-03-16 01:03:37 | INFO  | Task 1e84e9e3-46df-4ec1-8aca-a4890ebbd638 is in state STARTED 2026-03-16 01:03:37.045358 | orchestrator | 2026-03-16 01:03:37 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:03:40.092208 | orchestrator | 2026-03-16 01:03:40 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED 2026-03-16 01:03:40.095469 | orchestrator | 2026-03-16 01:03:40 | INFO  | Task 9a6de2c1-0daa-4690-beb5-321883059544 is in state STARTED 2026-03-16 01:03:40.097988 | orchestrator | 2026-03-16 01:03:40 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED 2026-03-16 01:03:40.101278 | orchestrator | 2026-03-16 01:03:40 | INFO  | Task 1e84e9e3-46df-4ec1-8aca-a4890ebbd638 is in state STARTED 2026-03-16 01:03:40.101318 | orchestrator | 2026-03-16 01:03:40 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:03:43.133388 | orchestrator | 2026-03-16 01:03:43 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED 2026-03-16 01:03:43.155388 | orchestrator | 2026-03-16 01:03:43 | INFO  | Task 9a6de2c1-0daa-4690-beb5-321883059544 is in state STARTED 2026-03-16 01:03:43.155435 | orchestrator | 2026-03-16 01:03:43 | INFO  | Task 
54be18b4-d159-468c-a2c7-ee804792ad99 is in state STARTED 2026-03-16 01:03:43.155440 | orchestrator | 2026-03-16 01:03:43 | INFO  | Task 1e84e9e3-46df-4ec1-8aca-a4890ebbd638 is in state STARTED 2026-03-16 01:03:43.155444 | orchestrator | 2026-03-16 01:03:43 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:03:46.162764 | orchestrator | 2026-03-16 01:03:46 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED 2026-03-16 01:03:46.162906 | orchestrator | 2026-03-16 01:03:46 | INFO  | Task cfabc90d-a51c-4527-8641-6ad8f3921e75 is in state STARTED 2026-03-16 01:03:46.163355 | orchestrator | 2026-03-16 01:03:46 | INFO  | Task 9a6de2c1-0daa-4690-beb5-321883059544 is in state STARTED 2026-03-16 01:03:46.164859 | orchestrator | 2026-03-16 01:03:46 | INFO  | Task 54be18b4-d159-468c-a2c7-ee804792ad99 is in state SUCCESS 2026-03-16 01:03:46.166306 | orchestrator | 2026-03-16 01:03:46.166368 | orchestrator | 2026-03-16 01:03:46.166379 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-16 01:03:46.166390 | orchestrator | 2026-03-16 01:03:46.166400 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-16 01:03:46.166466 | orchestrator | Monday 16 March 2026 01:03:10 +0000 (0:00:00.198) 0:00:00.198 ********** 2026-03-16 01:03:46.166517 | orchestrator | ok: [testbed-node-0] 2026-03-16 01:03:46.166536 | orchestrator | ok: [testbed-node-1] 2026-03-16 01:03:46.166633 | orchestrator | ok: [testbed-node-2] 2026-03-16 01:03:46.166644 | orchestrator | 2026-03-16 01:03:46.166654 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-16 01:03:46.166664 | orchestrator | Monday 16 March 2026 01:03:10 +0000 (0:00:00.267) 0:00:00.465 ********** 2026-03-16 01:03:46.166674 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-16 01:03:46.166684 | orchestrator | ok: [testbed-node-1] => 
(item=enable_keystone_True) 2026-03-16 01:03:46.166694 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-16 01:03:46.166703 | orchestrator | 2026-03-16 01:03:46.166713 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2026-03-16 01:03:46.166723 | orchestrator | 2026-03-16 01:03:46.166733 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2026-03-16 01:03:46.166743 | orchestrator | Monday 16 March 2026 01:03:11 +0000 (0:00:00.501) 0:00:00.967 ********** 2026-03-16 01:03:46.166753 | orchestrator | ok: [testbed-node-0] 2026-03-16 01:03:46.166762 | orchestrator | ok: [testbed-node-1] 2026-03-16 01:03:46.166772 | orchestrator | ok: [testbed-node-2] 2026-03-16 01:03:46.166808 | orchestrator | 2026-03-16 01:03:46.166890 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-16 01:03:46.166902 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-16 01:03:46.166914 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-16 01:03:46.166926 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-16 01:03:46.166937 | orchestrator | 2026-03-16 01:03:46.166948 | orchestrator | 2026-03-16 01:03:46.166959 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-16 01:03:46.166970 | orchestrator | Monday 16 March 2026 01:03:11 +0000 (0:00:00.636) 0:00:01.603 ********** 2026-03-16 01:03:46.166986 | orchestrator | =============================================================================== 2026-03-16 01:03:46.167003 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.64s 2026-03-16 01:03:46.167021 | orchestrator | Group hosts based on enabled services 
----------------------------------- 0.50s 2026-03-16 01:03:46.167102 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.27s 2026-03-16 01:03:46.167115 | orchestrator | 2026-03-16 01:03:46.167126 | orchestrator | 2026-03-16 01:03:46.167137 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-16 01:03:46.167147 | orchestrator | 2026-03-16 01:03:46.167164 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-16 01:03:46.167179 | orchestrator | Monday 16 March 2026 01:00:57 +0000 (0:00:00.259) 0:00:00.259 ********** 2026-03-16 01:03:46.167194 | orchestrator | ok: [testbed-node-0] 2026-03-16 01:03:46.167209 | orchestrator | ok: [testbed-node-1] 2026-03-16 01:03:46.167225 | orchestrator | ok: [testbed-node-2] 2026-03-16 01:03:46.167241 | orchestrator | 2026-03-16 01:03:46.167257 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-16 01:03:46.167312 | orchestrator | Monday 16 March 2026 01:00:57 +0000 (0:00:00.338) 0:00:00.598 ********** 2026-03-16 01:03:46.167331 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-03-16 01:03:46.167349 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-03-16 01:03:46.167368 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-03-16 01:03:46.167438 | orchestrator | 2026-03-16 01:03:46.167501 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-03-16 01:03:46.167511 | orchestrator | 2026-03-16 01:03:46.167521 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-16 01:03:46.167531 | orchestrator | Monday 16 March 2026 01:00:57 +0000 (0:00:00.529) 0:00:01.128 ********** 2026-03-16 01:03:46.167541 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 01:03:46.167626 | orchestrator | 2026-03-16 01:03:46.167649 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2026-03-16 01:03:46.167659 | orchestrator | Monday 16 March 2026 01:00:58 +0000 (0:00:00.466) 0:00:01.594 ********** 2026-03-16 01:03:46.167669 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-03-16 01:03:46.167678 | orchestrator | 2026-03-16 01:03:46.167688 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-03-16 01:03:46.167697 | orchestrator | Monday 16 March 2026 01:01:02 +0000 (0:00:04.370) 0:00:05.964 ********** 2026-03-16 01:03:46.167707 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-03-16 01:03:46.167717 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-03-16 01:03:46.167726 | orchestrator | 2026-03-16 01:03:46.167736 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-03-16 01:03:46.167767 | orchestrator | Monday 16 March 2026 01:01:10 +0000 (0:00:07.581) 0:00:13.545 ********** 2026-03-16 01:03:46.167779 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-16 01:03:46.167789 | orchestrator | 2026-03-16 01:03:46.167799 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-03-16 01:03:46.167808 | orchestrator | Monday 16 March 2026 01:01:13 +0000 (0:00:03.144) 0:00:16.690 ********** 2026-03-16 01:03:46.167834 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-16 01:03:46.167845 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-03-16 01:03:46.167854 | orchestrator | 2026-03-16 01:03:46.167864 | orchestrator | TASK [service-ks-register : designate | Creating roles] 
************************ 2026-03-16 01:03:46.167874 | orchestrator | Monday 16 March 2026 01:01:17 +0000 (0:00:04.270) 0:00:20.960 ********** 2026-03-16 01:03:46.167883 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-16 01:03:46.167893 | orchestrator | 2026-03-16 01:03:46.167903 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-03-16 01:03:46.167912 | orchestrator | Monday 16 March 2026 01:01:21 +0000 (0:00:03.750) 0:00:24.711 ********** 2026-03-16 01:03:46.167935 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-03-16 01:03:46.167954 | orchestrator | 2026-03-16 01:03:46.167964 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-03-16 01:03:46.167973 | orchestrator | Monday 16 March 2026 01:01:25 +0000 (0:00:04.092) 0:00:28.803 ********** 2026-03-16 01:03:46.167986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-16 01:03:46.168007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
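The service definitions logged above each carry a `healthcheck` dict (`interval`, `retries`, `start_period`, `test`, `timeout`). As an illustration only (not kolla-ansible's actual template code), such a dict maps naturally onto Docker's health-check CLI flags:

```python
# Illustrative sketch, NOT kolla-ansible's real implementation: render a
# kolla-style 'healthcheck' dict (as seen in the designate-api item above)
# into docker-run health-check flags.

def healthcheck_flags(hc):
    """Render a kolla-style healthcheck dict as docker run flags."""
    # kolla stores the test as ['CMD-SHELL', '<command>']; docker's
    # --health-cmd takes just the shell command string.
    kind, command = hc["test"]
    assert kind == "CMD-SHELL"
    return [
        "--health-cmd", command,
        "--health-interval", f"{hc['interval']}s",
        "--health-retries", str(hc["retries"]),
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
    ]

# Values copied from the designate-api entry in the log above.
designate_api_hc = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9001"],
    "timeout": "30",
}
flags = healthcheck_flags(designate_api_hc)
```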
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-16 01:03:46.168023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-16 01:03:46.168033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-16 01:03:46.168056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-16 01:03:46.168075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-16 01:03:46.168093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.168120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.168153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.168172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.168201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.168221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.168238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.168268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.168285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.168309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.168327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.168349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.168360 | orchestrator | 2026-03-16 01:03:46.168376 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-03-16 01:03:46.168391 | orchestrator | Monday 16 March 2026 01:01:28 +0000 (0:00:03.023) 0:00:31.827 ********** 2026-03-16 01:03:46.168407 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:03:46.168423 | orchestrator | 2026-03-16 01:03:46.168439 | orchestrator | TASK [designate : Set 
designate policy file] *********************************** 2026-03-16 01:03:46.168456 | orchestrator | Monday 16 March 2026 01:01:28 +0000 (0:00:00.128) 0:00:31.956 ********** 2026-03-16 01:03:46.168470 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:03:46.168480 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:03:46.168489 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:03:46.168506 | orchestrator | 2026-03-16 01:03:46.168515 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-16 01:03:46.168525 | orchestrator | Monday 16 March 2026 01:01:29 +0000 (0:00:00.274) 0:00:32.231 ********** 2026-03-16 01:03:46.168535 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 01:03:46.168577 | orchestrator | 2026-03-16 01:03:46.168589 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-03-16 01:03:46.168599 | orchestrator | Monday 16 March 2026 01:01:29 +0000 (0:00:00.707) 0:00:32.939 ********** 2026-03-16 01:03:46.168610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}}) 2026-03-16 01:03:46.168621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-16 01:03:46.168631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-16 01:03:46.168676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
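Each designate-api item above also carries an `haproxy` sub-dict describing an internal and an external load-balancer frontend. As a hedged sketch (the stanza layout and both VIP addresses below are assumptions for illustration; kolla-ansible's real haproxy templates are considerably richer), one such entry could be rendered like this:

```python
# Hypothetical sketch: turn one kolla 'haproxy' service entry, like the
# designate_api entry in the log above, into a minimal haproxy listen stanza.
# The VIPs passed in are made-up example values, not taken from this deployment.

def render_listen(name, svc, internal_vip, external_vip):
    """Render one kolla haproxy service entry as a listen stanza."""
    # 'external' selects which VIP the frontend binds to.
    vip = external_vip if svc["external"] else internal_vip
    return "\n".join([
        f"listen {name}",
        f"    mode {svc['mode']}",
        f"    bind {vip}:{svc['listen_port']}",
    ])

stanza = render_listen(
    "designate_api",
    {"enabled": "yes", "mode": "http", "external": False,
     "port": "9001", "listen_port": "9001"},
    internal_vip="192.168.16.9",   # assumed internal VIP
    external_vip="203.0.113.10",   # assumed external VIP
)
```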
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-16 01:03:46.168697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-16 01:03:46.168707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-16 01:03:46.168718 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.168728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.168742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.168759 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.168770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.168786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.168797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.168807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.168826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.168836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.168853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.168869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.168879 | orchestrator | 2026-03-16 01:03:46.168889 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-03-16 01:03:46.168899 | orchestrator | Monday 16 March 2026 01:01:36 +0000 (0:00:06.391) 0:00:39.330 
********** 2026-03-16 01:03:46.168909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-16 01:03:46.168919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-16 01:03:46.168933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.168943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.169759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.169817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.169835 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:03:46.169854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-16 01:03:46.169870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-16 01:03:46.169892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.169907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.169941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.169956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 
'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.169970 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:03:46.169984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-16 01:03:46.169999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-16 01:03:46.170046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.170058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.170079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.170088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.170096 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:03:46.170104 | orchestrator | 2026-03-16 01:03:46.170112 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-03-16 01:03:46.170121 | orchestrator | Monday 16 March 2026 01:01:37 +0000 (0:00:01.533) 0:00:40.864 ********** 2026-03-16 01:03:46.170130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-16 01:03:46.170138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-16 01:03:46.170150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.170158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.170175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.170184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.170192 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:03:46.170200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-16 01:03:46.170208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-16 01:03:46.170220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.170235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.170248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.170256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.170264 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:03:46.170273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-16 01:03:46.170281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-16 01:03:46.170292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-16 
01:03:46.170305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.170318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.170330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.170339 | orchestrator | skipping: [testbed-node-1] 2026-03-16 
01:03:46.170348 | orchestrator | 2026-03-16 01:03:46.170357 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-03-16 01:03:46.170366 | orchestrator | Monday 16 March 2026 01:01:39 +0000 (0:00:01.924) 0:00:42.788 ********** 2026-03-16 01:03:46.170376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-16 01:03:46.170386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-16 01:03:46.170407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-16 01:03:46.170422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-16 01:03:46.170433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-16 01:03:46.170442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-16 01:03:46.170452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.170462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.170478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.170488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.170503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.170513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.170522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.170532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.170561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.170575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.170594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.170609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.170629 | orchestrator | 2026-03-16 01:03:46.170645 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-03-16 01:03:46.170658 | orchestrator | Monday 16 March 2026 01:01:47 +0000 (0:00:07.497) 0:00:50.286 ********** 2026-03-16 01:03:46.170671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-16 01:03:46.170684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-16 01:03:46.170712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-16 01:03:46.170736 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-16 01:03:46.170750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-16 01:03:46.170765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-16 01:03:46.170779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.170799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.170811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': 
'30'}}}) 2026-03-16 01:03:46.170820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.170834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.170842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.170851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 
'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.170859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.170871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.170882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 
'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.170891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.170904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.170912 | orchestrator | 2026-03-16 01:03:46.170920 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-03-16 01:03:46.170929 | orchestrator | Monday 16 March 2026 
01:02:08 +0000 (0:00:21.314) 0:01:11.600 ********** 2026-03-16 01:03:46.170937 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-16 01:03:46.170945 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-16 01:03:46.170953 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-16 01:03:46.170961 | orchestrator | 2026-03-16 01:03:46.170969 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-03-16 01:03:46.170977 | orchestrator | Monday 16 March 2026 01:02:15 +0000 (0:00:07.343) 0:01:18.944 ********** 2026-03-16 01:03:46.170985 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-16 01:03:46.170993 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-16 01:03:46.171006 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-16 01:03:46.171014 | orchestrator | 2026-03-16 01:03:46.171022 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-03-16 01:03:46.171030 | orchestrator | Monday 16 March 2026 01:02:19 +0000 (0:00:03.388) 0:01:22.332 ********** 2026-03-16 01:03:46.171038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-16 01:03:46.171051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-16 01:03:46.171063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-16 01:03:46.171072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-16 01:03:46.171081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.171093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.171101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.171113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-16 01:03:46.171121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.171135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.171144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.171159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-16 01:03:46.171167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.171175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.171187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.171199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.171208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.171216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 
2026-03-16 01:03:46.171228 | orchestrator | 2026-03-16 01:03:46.171236 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-03-16 01:03:46.171244 | orchestrator | Monday 16 March 2026 01:02:22 +0000 (0:00:03.738) 0:01:26.071 ********** 2026-03-16 01:03:46.171252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-16 01:03:46.171264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-16 01:03:46.171273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-16 01:03:46.171286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-16 01:03:46.171299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.171307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.171316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-16 01:03:46.171327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.171335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.171398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-16 01:03:46.171414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.171422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.171431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.171439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.171451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.171460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.171473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-16 01:03:46.171487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-16 01:03:46.171495 | orchestrator |
2026-03-16 01:03:46.171503 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-16 01:03:46.171510 | orchestrator | Monday 16 March 2026 01:02:25 +0000 (0:00:02.458) 0:01:28.529 **********
2026-03-16 01:03:46.171519 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:03:46.171526 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:03:46.171534 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:03:46.171542 | orchestrator |
2026-03-16 01:03:46.171597 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-03-16 01:03:46.171605 | orchestrator | Monday 16 March 2026 01:02:26 +0000 (0:00:00.870) 0:01:29.399 **********
2026-03-16 01:03:46.171614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-16 01:03:46.171622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-16 01:03:46.171634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.171647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.171661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.171669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-16 
01:03:46.171678 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:03:46.171686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-16 01:03:46.171695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-16 01:03:46.171706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.171723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.171732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.171740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.171748 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:03:46.171756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-16 01:03:46.171764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-16 
01:03:46.171776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.171792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.171800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-16 01:03:46.171809 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-16 01:03:46.171817 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:03:46.171825 | orchestrator |
2026-03-16 01:03:46.171833 | orchestrator | TASK [designate : Check designate containers] **********************************
2026-03-16 01:03:46.171841 | orchestrator | Monday 16 March 2026 01:02:27 +0000 (0:00:01.328) 0:01:30.728 **********
2026-03-16 01:03:46.171849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-16 01:03:46.171864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-16 01:03:46.171880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-16 01:03:46.171889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-16 01:03:46.171897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-16 01:03:46.171906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-16 01:03:46.171914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.171925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.171938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.171951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.171960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.171968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.171976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.171984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.171999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-16 01:03:46.172008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-16 01:03:46.172019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-16 01:03:46.172026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-16 01:03:46.172033 | orchestrator |
2026-03-16 01:03:46.172040 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-16 01:03:46.172047 | orchestrator | Monday 16 March 2026 01:02:33 +0000 (0:00:05.964) 0:01:36.692 **********
2026-03-16 01:03:46.172053 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:03:46.172060 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:03:46.172067 | orchestrator | 
skipping: [testbed-node-2]
2026-03-16 01:03:46.172073 | orchestrator |
2026-03-16 01:03:46.172080 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2026-03-16 01:03:46.172087 | orchestrator | Monday 16 March 2026 01:02:33 +0000 (0:00:00.277) 0:01:36.970 **********
2026-03-16 01:03:46.172093 | orchestrator | changed: [testbed-node-0] => (item=designate)
2026-03-16 01:03:46.172100 | orchestrator |
2026-03-16 01:03:46.172107 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2026-03-16 01:03:46.172114 | orchestrator | Monday 16 March 2026 01:02:36 +0000 (0:00:02.419) 0:01:39.390 **********
2026-03-16 01:03:46.172120 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-16 01:03:46.172127 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2026-03-16 01:03:46.172134 | orchestrator |
2026-03-16 01:03:46.172140 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2026-03-16 01:03:46.172147 | orchestrator | Monday 16 March 2026 01:02:39 +0000 (0:00:02.856) 0:01:42.247 **********
2026-03-16 01:03:46.172154 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:03:46.172164 | orchestrator |
2026-03-16 01:03:46.172171 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-03-16 01:03:46.172177 | orchestrator | Monday 16 March 2026 01:02:55 +0000 (0:00:16.381) 0:01:58.628 **********
2026-03-16 01:03:46.172184 | orchestrator |
2026-03-16 01:03:46.172191 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-03-16 01:03:46.172197 | orchestrator | Monday 16 March 2026 01:02:55 +0000 (0:00:00.225) 0:01:58.853 **********
2026-03-16 01:03:46.172204 | orchestrator |
2026-03-16 01:03:46.172210 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-03-16 01:03:46.172217 | orchestrator | Monday 16 March 2026 01:02:55 +0000 (0:00:00.304) 0:01:59.158 **********
2026-03-16 01:03:46.172224 | orchestrator |
2026-03-16 01:03:46.172230 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2026-03-16 01:03:46.172237 | orchestrator | Monday 16 March 2026 01:02:55 +0000 (0:00:00.069) 0:01:59.227 **********
2026-03-16 01:03:46.172243 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:03:46.172250 | orchestrator | changed: [testbed-node-1]
2026-03-16 01:03:46.172257 | orchestrator | changed: [testbed-node-2]
2026-03-16 01:03:46.172263 | orchestrator |
2026-03-16 01:03:46.172270 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2026-03-16 01:03:46.172277 | orchestrator | Monday 16 March 2026 01:03:05 +0000 (0:00:09.215) 0:02:08.442 **********
2026-03-16 01:03:46.172283 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:03:46.172292 | orchestrator | changed: [testbed-node-1]
2026-03-16 01:03:46.172299 | orchestrator | changed: [testbed-node-2]
2026-03-16 01:03:46.172306 | orchestrator |
2026-03-16 01:03:46.172312 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2026-03-16 01:03:46.172319 | orchestrator | Monday 16 March 2026 01:03:12 +0000 (0:00:07.477) 0:02:15.920 **********
2026-03-16 01:03:46.172326 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:03:46.172332 | orchestrator | changed: [testbed-node-1]
2026-03-16 01:03:46.172339 | orchestrator | changed: [testbed-node-2]
2026-03-16 01:03:46.172345 | orchestrator |
2026-03-16 01:03:46.172352 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2026-03-16 01:03:46.172359 | orchestrator | Monday 16 March 2026 01:03:19 +0000 (0:00:06.824) 0:02:22.744 **********
2026-03-16 01:03:46.172365 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:03:46.172372 | orchestrator | changed: [testbed-node-1]
2026-03-16 01:03:46.172379 | orchestrator | changed: [testbed-node-2]
2026-03-16 01:03:46.172385 | orchestrator |
2026-03-16 01:03:46.172392 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2026-03-16 01:03:46.172398 | orchestrator | Monday 16 March 2026 01:03:25 +0000 (0:00:06.282) 0:02:29.027 **********
2026-03-16 01:03:46.172405 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:03:46.172412 | orchestrator | changed: [testbed-node-2]
2026-03-16 01:03:46.172418 | orchestrator | changed: [testbed-node-1]
2026-03-16 01:03:46.172425 | orchestrator |
2026-03-16 01:03:46.172431 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2026-03-16 01:03:46.172441 | orchestrator | Monday 16 March 2026 01:03:31 +0000 (0:00:05.564) 0:02:34.592 **********
2026-03-16 01:03:46.172448 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:03:46.172455 | orchestrator | changed: [testbed-node-2]
2026-03-16 01:03:46.172462 | orchestrator | changed: [testbed-node-1]
2026-03-16 01:03:46.172468 | orchestrator |
2026-03-16 01:03:46.172475 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2026-03-16 01:03:46.172481 | orchestrator | Monday 16 March 2026 01:03:37 +0000 (0:00:05.791) 0:02:40.383 **********
2026-03-16 01:03:46.172488 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:03:46.172494 | orchestrator |
2026-03-16 01:03:46.172501 | orchestrator | PLAY RECAP *********************************************************************
2026-03-16 01:03:46.172508 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-16 01:03:46.172520 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-16 01:03:46.172527 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  
rescued=0 ignored=0 2026-03-16 01:03:46.172533 | orchestrator | 2026-03-16 01:03:46.172540 | orchestrator | 2026-03-16 01:03:46.172557 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-16 01:03:46.172564 | orchestrator | Monday 16 March 2026 01:03:43 +0000 (0:00:06.564) 0:02:46.947 ********** 2026-03-16 01:03:46.172571 | orchestrator | =============================================================================== 2026-03-16 01:03:46.172577 | orchestrator | designate : Copying over designate.conf -------------------------------- 21.31s 2026-03-16 01:03:46.172584 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.38s 2026-03-16 01:03:46.172590 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 9.22s 2026-03-16 01:03:46.172597 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.58s 2026-03-16 01:03:46.172604 | orchestrator | designate : Copying over config.json files for services ----------------- 7.50s 2026-03-16 01:03:46.172610 | orchestrator | designate : Restart designate-api container ----------------------------- 7.48s 2026-03-16 01:03:46.172617 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 7.34s 2026-03-16 01:03:46.172623 | orchestrator | designate : Restart designate-central container ------------------------- 6.82s 2026-03-16 01:03:46.172630 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 6.56s 2026-03-16 01:03:46.172636 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.39s 2026-03-16 01:03:46.172643 | orchestrator | designate : Restart designate-producer container ------------------------ 6.28s 2026-03-16 01:03:46.172650 | orchestrator | designate : Check designate containers ---------------------------------- 5.96s 2026-03-16 01:03:46.172656 | 
orchestrator | designate : Restart designate-worker container -------------------------- 5.79s
2026-03-16 01:03:46.172663 | orchestrator | designate : Restart designate-mdns container ---------------------------- 5.56s
2026-03-16 01:03:46.172669 | orchestrator | service-ks-register : designate | Creating services --------------------- 4.37s
2026-03-16 01:03:46.172676 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.27s
2026-03-16 01:03:46.172682 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.09s
2026-03-16 01:03:46.172689 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.75s
2026-03-16 01:03:46.172696 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.74s
2026-03-16 01:03:46.172702 | orchestrator | designate : Copying over named.conf ------------------------------------- 3.39s
2026-03-16 01:03:46.172709 | orchestrator | 2026-03-16 01:03:46 | INFO  | Task 1e84e9e3-46df-4ec1-8aca-a4890ebbd638 is in state STARTED
2026-03-16 01:03:46.172715 | orchestrator | 2026-03-16 01:03:46 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:03:49.191117 | orchestrator | 2026-03-16 01:03:49 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED
2026-03-16 01:03:49.191227 | orchestrator | 2026-03-16 01:03:49 | INFO  | Task cfabc90d-a51c-4527-8641-6ad8f3921e75 is in state STARTED
2026-03-16 01:03:49.191670 | orchestrator | 2026-03-16 01:03:49 | INFO  | Task 9a6de2c1-0daa-4690-beb5-321883059544 is in state STARTED
2026-03-16 01:03:49.192496 | orchestrator | 2026-03-16 01:03:49 | INFO  | Task 1e84e9e3-46df-4ec1-8aca-a4890ebbd638 is in state STARTED
2026-03-16 01:03:49.192531 | orchestrator | 2026-03-16 01:03:49 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:04:25.880843 | orchestrator | 2026-03-16 01:04:25 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED
2026-03-16 01:04:25.881545 | orchestrator | 2026-03-16 01:04:25 | INFO  | Task cfabc90d-a51c-4527-8641-6ad8f3921e75 is in state SUCCESS
2026-03-16 01:04:25.888283 | orchestrator | 2026-03-16 01:04:25 | INFO  | Task b859e38e-dd67-46e0-af66-e713f10b74a2 is in state STARTED
2026-03-16 01:04:25.889128 | orchestrator | 2026-03-16 01:04:25 | INFO  | Task 9a6de2c1-0daa-4690-beb5-321883059544 is in state STARTED
2026-03-16 01:04:25.889946 | orchestrator | 2026-03-16 01:04:25 | INFO  | Task 1e84e9e3-46df-4ec1-8aca-a4890ebbd638 is in state STARTED
2026-03-16 01:04:25.890042 | orchestrator | 2026-03-16 01:04:25 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:05:02.450215 | orchestrator | 2026-03-16 01:05:02 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED
2026-03-16 01:05:02.451581 | orchestrator | 2026-03-16 01:05:02 | INFO  | Task b859e38e-dd67-46e0-af66-e713f10b74a2 is in state STARTED
2026-03-16 01:05:02.452653 | orchestrator | 2026-03-16 01:05:02 | INFO  | Task b5eb2108-9dfa-426c-b183-06439e490c0a is in state STARTED
2026-03-16 01:05:02.453899 | orchestrator | 2026-03-16 01:05:02 | INFO  | Task 9a6de2c1-0daa-4690-beb5-321883059544 is in state SUCCESS
2026-03-16 01:05:02.455093 | orchestrator |
2026-03-16 01:05:02.455125 | orchestrator |
2026-03-16 01:05:02.455133 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-16 01:05:02.455139 | orchestrator | 2026-03-16 01:05:02.455146 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-16 01:05:02.455152 | orchestrator | Monday 16 March 2026 01:03:53 +0000 (0:00:00.552) 0:00:00.552 ********** 2026-03-16 01:05:02.455159 | orchestrator | ok: [testbed-manager] 2026-03-16 01:05:02.455166 | orchestrator | ok: [testbed-node-0] 2026-03-16 01:05:02.455172 | orchestrator | ok: [testbed-node-1] 2026-03-16 01:05:02.455178 | orchestrator | ok: [testbed-node-2] 2026-03-16 01:05:02.455184 | orchestrator | ok: [testbed-node-3] 2026-03-16 01:05:02.455189 | orchestrator | ok: [testbed-node-4] 2026-03-16 01:05:02.455194 | orchestrator | ok: [testbed-node-5] 2026-03-16 01:05:02.455200 | orchestrator | 2026-03-16 01:05:02.455205 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-16 01:05:02.455211 | orchestrator | Monday 16 March 2026 01:03:54 +0000 (0:00:01.028) 0:00:01.581 ********** 2026-03-16 01:05:02.455217 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-03-16 01:05:02.455222 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-03-16 01:05:02.455228 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-03-16 01:05:02.455233 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-03-16 01:05:02.455238 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-03-16 01:05:02.455244 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-03-16 01:05:02.455249 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-03-16 01:05:02.455255 | orchestrator | 2026-03-16 01:05:02.455261 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-03-16 01:05:02.455266 | orchestrator | 2026-03-16 01:05:02.455272 | orchestrator | TASK [ceph-rgw : include_tasks] 
************************************************ 2026-03-16 01:05:02.455277 | orchestrator | Monday 16 March 2026 01:03:54 +0000 (0:00:00.636) 0:00:02.218 ********** 2026-03-16 01:05:02.455283 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-16 01:05:02.455289 | orchestrator | 2026-03-16 01:05:02.455295 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-03-16 01:05:02.455300 | orchestrator | Monday 16 March 2026 01:03:56 +0000 (0:00:01.718) 0:00:03.936 ********** 2026-03-16 01:05:02.455306 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2026-03-16 01:05:02.455311 | orchestrator | 2026-03-16 01:05:02.455315 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-03-16 01:05:02.455320 | orchestrator | Monday 16 March 2026 01:03:59 +0000 (0:00:03.491) 0:00:07.428 ********** 2026-03-16 01:05:02.455326 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-03-16 01:05:02.455340 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-03-16 01:05:02.455346 | orchestrator | 2026-03-16 01:05:02.455391 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-03-16 01:05:02.455398 | orchestrator | Monday 16 March 2026 01:04:06 +0000 (0:00:06.098) 0:00:13.526 ********** 2026-03-16 01:05:02.455404 | orchestrator | ok: [testbed-manager] => (item=service) 2026-03-16 01:05:02.455410 | orchestrator | 2026-03-16 01:05:02.455415 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-03-16 01:05:02.455421 | orchestrator | Monday 16 March 2026 01:04:08 
+0000 (0:00:02.908) 0:00:16.434 ********** 2026-03-16 01:05:02.455427 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-16 01:05:02.455433 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2026-03-16 01:05:02.455438 | orchestrator | 2026-03-16 01:05:02.455444 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-03-16 01:05:02.455480 | orchestrator | Monday 16 March 2026 01:04:12 +0000 (0:00:03.444) 0:00:19.879 ********** 2026-03-16 01:05:02.455487 | orchestrator | ok: [testbed-manager] => (item=admin) 2026-03-16 01:05:02.455523 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2026-03-16 01:05:02.455529 | orchestrator | 2026-03-16 01:05:02.455535 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2026-03-16 01:05:02.455541 | orchestrator | Monday 16 March 2026 01:04:17 +0000 (0:00:05.417) 0:00:25.296 ********** 2026-03-16 01:05:02.455547 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2026-03-16 01:05:02.455552 | orchestrator | 2026-03-16 01:05:02.455558 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-16 01:05:02.455564 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-16 01:05:02.455570 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-16 01:05:02.455576 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-16 01:05:02.455582 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-16 01:05:02.455588 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-16 01:05:02.455602 | orchestrator | testbed-node-4 : ok=3  
changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-16 01:05:02.455607 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-16 01:05:02.455612 | orchestrator |
2026-03-16 01:05:02.455618 | orchestrator |
2026-03-16 01:05:02.455623 | orchestrator | TASKS RECAP ********************************************************************
2026-03-16 01:05:02.455628 | orchestrator | Monday 16 March 2026 01:04:22 +0000 (0:00:04.617) 0:00:29.914 **********
2026-03-16 01:05:02.455633 | orchestrator | ===============================================================================
2026-03-16 01:05:02.455639 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.10s
2026-03-16 01:05:02.455644 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.42s
2026-03-16 01:05:02.455650 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.62s
2026-03-16 01:05:02.455655 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.49s
2026-03-16 01:05:02.455661 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.44s
2026-03-16 01:05:02.455666 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.91s
2026-03-16 01:05:02.455671 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.72s
2026-03-16 01:05:02.455677 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.03s
2026-03-16 01:05:02.455700 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.64s
2026-03-16 01:05:02.455706 | orchestrator |
2026-03-16 01:05:02.455712 | orchestrator |
2026-03-16 01:05:02.455718 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-16 01:05:02.455724 | orchestrator |
2026-03-16 01:05:02.455730 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-16 01:05:02.455735 | orchestrator | Monday 16 March 2026 01:03:06 +0000 (0:00:00.480) 0:00:00.480 **********
2026-03-16 01:05:02.455741 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:05:02.455759 | orchestrator | ok: [testbed-node-1]
2026-03-16 01:05:02.455764 | orchestrator | ok: [testbed-node-2]
2026-03-16 01:05:02.455769 | orchestrator |
2026-03-16 01:05:02.455774 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-16 01:05:02.455780 | orchestrator | Monday 16 March 2026 01:03:06 +0000 (0:00:00.690) 0:00:01.171 **********
2026-03-16 01:05:02.455785 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-03-16 01:05:02.455790 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-03-16 01:05:02.455796 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-03-16 01:05:02.455801 | orchestrator |
2026-03-16 01:05:02.455806 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-03-16 01:05:02.455811 | orchestrator |
2026-03-16 01:05:02.455817 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-03-16 01:05:02.455822 | orchestrator | Monday 16 March 2026 01:03:07 +0000 (0:00:01.110) 0:00:02.281 **********
2026-03-16 01:05:02.455831 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-16 01:05:02.455837 | orchestrator |
2026-03-16 01:05:02.455842 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2026-03-16 01:05:02.455847 | orchestrator | Monday 16 March 2026 01:03:08 +0000 (0:00:00.992) 0:00:03.274 **********
2026-03-16 01:05:02.455852 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2026-03-16 01:05:02.455858 | orchestrator |
2026-03-16 01:05:02.455863 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2026-03-16 01:05:02.455868 | orchestrator | Monday 16 March 2026 01:03:12 +0000 (0:00:03.419) 0:00:06.693 **********
2026-03-16 01:05:02.455873 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2026-03-16 01:05:02.455879 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2026-03-16 01:05:02.455884 | orchestrator |
2026-03-16 01:05:02.455889 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2026-03-16 01:05:02.455894 | orchestrator | Monday 16 March 2026 01:03:18 +0000 (0:00:06.375) 0:00:13.069 **********
2026-03-16 01:05:02.455900 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-16 01:05:02.455905 | orchestrator |
2026-03-16 01:05:02.455910 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2026-03-16 01:05:02.455916 | orchestrator | Monday 16 March 2026 01:03:21 +0000 (0:00:02.882) 0:00:15.951 **********
2026-03-16 01:05:02.455921 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-16 01:05:02.455926 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2026-03-16 01:05:02.455931 | orchestrator |
2026-03-16 01:05:02.455937 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2026-03-16 01:05:02.455943 | orchestrator | Monday 16 March 2026 01:03:25 +0000 (0:00:03.703) 0:00:19.654 **********
2026-03-16 01:05:02.455948 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-16 01:05:02.455953 | orchestrator |
2026-03-16 01:05:02.455958 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********
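The endpoint registrations logged above follow a fixed pattern: one URL built from the internal API FQDN and one from the public FQDN, both on the magnum port with a /v1 suffix. A minimal sketch of that derivation; the function and variable names are invented for illustration, and only the FQDNs, the port (9511), and the interface names are taken from the log:

```python
# Minimal sketch, not kolla-ansible code: derive the two Keystone endpoint
# URLs the log shows being registered for the magnum API.

def magnum_endpoints(internal_fqdn: str, external_fqdn: str, port: int = 9511) -> dict:
    """Build the internal and public endpoint URLs for the magnum API."""
    return {
        "internal": f"https://{internal_fqdn}:{port}/v1",
        "public": f"https://{external_fqdn}:{port}/v1",
    }

endpoints = magnum_endpoints("api-int.testbed.osism.xyz", "api.testbed.osism.xyz")
print(endpoints["internal"])  # https://api-int.testbed.osism.xyz:9511/v1
print(endpoints["public"])    # https://api.testbed.osism.xyz:9511/v1
```

The same split shows up again further down in the service definitions, where the haproxy entry `magnum_api_external` carries the external FQDN while `magnum_api` serves the internal VIP.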
2026-03-16 01:05:02.455963 | orchestrator | Monday 16 March 2026 01:03:28 +0000 (0:00:03.308) 0:00:22.962 ********** 2026-03-16 01:05:02.455969 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-03-16 01:05:02.455978 | orchestrator | 2026-03-16 01:05:02.455984 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-03-16 01:05:02.455989 | orchestrator | Monday 16 March 2026 01:03:32 +0000 (0:00:03.897) 0:00:26.860 ********** 2026-03-16 01:05:02.455994 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:05:02.456000 | orchestrator | 2026-03-16 01:05:02.456005 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-03-16 01:05:02.456016 | orchestrator | Monday 16 March 2026 01:03:35 +0000 (0:00:03.138) 0:00:29.998 ********** 2026-03-16 01:05:02.456022 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:05:02.456027 | orchestrator | 2026-03-16 01:05:02.456033 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-03-16 01:05:02.456038 | orchestrator | Monday 16 March 2026 01:03:39 +0000 (0:00:03.513) 0:00:33.512 ********** 2026-03-16 01:05:02.456043 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:05:02.456049 | orchestrator | 2026-03-16 01:05:02.456054 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-03-16 01:05:02.456060 | orchestrator | Monday 16 March 2026 01:03:42 +0000 (0:00:02.960) 0:00:36.473 ********** 2026-03-16 01:05:02.456067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-16 01:05:02.456077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-16 01:05:02.456083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-16 01:05:02.456089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-16 01:05:02.456103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}})
2026-03-16 01:05:02.456109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-16 01:05:02.456115 | orchestrator |
2026-03-16 01:05:02.456120 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2026-03-16 01:05:02.456125 | orchestrator | Monday 16 March 2026 01:03:44 +0000 (0:00:02.231) 0:00:38.704 **********
2026-03-16 01:05:02.456131 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:05:02.456136 | orchestrator |
2026-03-16 01:05:02.456142 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2026-03-16 01:05:02.456148 | orchestrator | Monday 16 March 2026 01:03:44 +0000 (0:00:00.282) 0:00:38.987 **********
2026-03-16 01:05:02.456153 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:05:02.456158 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:05:02.456164 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:05:02.456169 | orchestrator |
2026-03-16 01:05:02.456175 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2026-03-16 01:05:02.456180 | orchestrator | Monday 16 March 2026 01:03:45 +0000 (0:00:01.149) 0:00:40.136 **********
2026-03-16 01:05:02.456186 | orchestrator | ok: [testbed-node-0 ->
localhost] 2026-03-16 01:05:02.456192 | orchestrator | 2026-03-16 01:05:02.456199 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-03-16 01:05:02.456205 | orchestrator | Monday 16 March 2026 01:03:47 +0000 (0:00:01.575) 0:00:41.712 ********** 2026-03-16 01:05:02.456211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-16 01:05:02.456220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-16 01:05:02.456231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-16 01:05:02.456237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-16 01:05:02.456245 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-16 01:05:02.456251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-16 01:05:02.456260 | orchestrator | 2026-03-16 01:05:02.456265 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-03-16 01:05:02.456271 | orchestrator | Monday 16 March 2026 01:03:50 +0000 (0:00:03.476) 0:00:45.188 ********** 2026-03-16 01:05:02.456276 | orchestrator | ok: [testbed-node-0] 2026-03-16 01:05:02.456281 | orchestrator | ok: [testbed-node-1] 2026-03-16 01:05:02.456287 | orchestrator | ok: [testbed-node-2] 
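Every per-service task in this play (config directories, kubeconfig copy, cert copy, config.json) loops over items of the shape {'key': <service name>, 'value': <service definition>}, which is what Ansible's dict2items filter produces from a services mapping, typically combined with a filter on the enabled flag. A toy sketch of that loop shape; the cut-down service dicts below are illustrative, not the full definitions from the log:

```python
# Toy sketch of the per-service loop shape seen in this log: a services
# mapping is filtered on 'enabled' and each surviving entry becomes a loop
# item {'key': <name>, 'value': <definition>}. Real definitions also carry
# image, volumes, healthcheck, and haproxy settings as shown above.
services = {
    "magnum-api": {"container_name": "magnum_api", "enabled": True},
    "magnum-conductor": {"container_name": "magnum_conductor", "enabled": True},
}

items = [{"key": name, "value": svc} for name, svc in services.items() if svc["enabled"]]
for item in items:
    print(f"{item['key']} -> {item['value']['container_name']}")
```

This is why the same large dictionaries repeat once per node and per task in the output: each host evaluates the same item list independently.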
2026-03-16 01:05:02.456293 | orchestrator | 2026-03-16 01:05:02.456298 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-16 01:05:02.456304 | orchestrator | Monday 16 March 2026 01:03:51 +0000 (0:00:00.438) 0:00:45.627 ********** 2026-03-16 01:05:02.456309 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 01:05:02.456314 | orchestrator | 2026-03-16 01:05:02.456319 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-03-16 01:05:02.456324 | orchestrator | Monday 16 March 2026 01:03:52 +0000 (0:00:01.241) 0:00:46.869 ********** 2026-03-16 01:05:02.456335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-16 01:05:02.456341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-16 01:05:02.456350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-16 01:05:02.456359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-16 01:05:02.456398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-16 01:05:02.456409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-16 01:05:02.456414 | orchestrator | 2026-03-16 
01:05:02.456420 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-03-16 01:05:02.456426 | orchestrator | Monday 16 March 2026 01:03:55 +0000 (0:00:02.660) 0:00:49.529 ********** 2026-03-16 01:05:02.456432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-16 01:05:02.456440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-16 01:05:02.456453 
| orchestrator | skipping: [testbed-node-0] 2026-03-16 01:05:02.456459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-16 01:05:02.456468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-16 01:05:02.456475 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-16 01:05:02.456481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-16 01:05:02.456490 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:05:02.456496 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:05:02.456502 | orchestrator | 2026-03-16 01:05:02.456507 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-03-16 01:05:02.456513 | orchestrator | Monday 16 March 2026 01:03:55 +0000 (0:00:00.572) 0:00:50.102 ********** 2026-03-16 01:05:02.456521 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-16 01:05:02.456527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-16 01:05:02.456532 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:05:02.456712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-16 01:05:02.456727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-16 01:05:02.456733 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:05:02.456744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-16 01:05:02.456754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-16 01:05:02.456761 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:05:02.456767 | orchestrator | 2026-03-16 01:05:02.456773 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-03-16 01:05:02.456779 | orchestrator | Monday 16 March 2026 01:03:56 +0000 (0:00:01.117) 0:00:51.219 ********** 2026-03-16 01:05:02.456784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-16 01:05:02.456795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-16 01:05:02.456801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-16 01:05:02.456814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-16 01:05:02.456821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-16 01:05:02.456827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-16 01:05:02.456833 | orchestrator | 2026-03-16 01:05:02.456842 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-03-16 01:05:02.456848 | orchestrator | Monday 16 March 2026 01:03:59 +0000 (0:00:02.484) 0:00:53.703 ********** 2026-03-16 01:05:02.456854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 
'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-16 01:05:02.456881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-16 01:05:02.456892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-16 01:05:02.456898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-16 01:05:02.456908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-16 01:05:02.456914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-16 01:05:02.456924 | orchestrator | 2026-03-16 01:05:02.456930 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-03-16 01:05:02.456936 | orchestrator | Monday 16 March 2026 01:04:07 +0000 (0:00:08.244) 0:01:01.947 ********** 2026-03-16 01:05:02.456944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-16 01:05:02.456950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-16 01:05:02.456956 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:05:02.456963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-16 01:05:02.456974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-16 01:05:02.456985 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:05:02.456992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-16 01:05:02.457000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-16 01:05:02.457006 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:05:02.457012 | orchestrator | 2026-03-16 01:05:02.457018 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-03-16 01:05:02.457024 | orchestrator | Monday 16 March 2026 01:04:08 +0000 (0:00:00.859) 0:01:02.807 ********** 2026-03-16 01:05:02.457031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-16 01:05:02.457040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-16 01:05:02.457047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-16 01:05:02.457056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-16 01:05:02.457065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-16 01:05:02.457071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-16 01:05:02.457078 | orchestrator | 2026-03-16 01:05:02.457084 | orchestrator | TASK [magnum : include_tasks] 
**************************************************
2026-03-16 01:05:02.457090 | orchestrator | Monday 16 March 2026 01:04:11 +0000 (0:00:02.629) 0:01:05.436 **********
2026-03-16 01:05:02.457110 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:05:02.457117 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:05:02.457123 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:05:02.457128 | orchestrator |
2026-03-16 01:05:02.457134 | orchestrator | TASK [magnum : Creating Magnum database] ***************************************
2026-03-16 01:05:02.457140 | orchestrator | Monday 16 March 2026 01:04:11 +0000 (0:00:00.279) 0:01:05.716 **********
2026-03-16 01:05:02.457146 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:05:02.457151 | orchestrator |
2026-03-16 01:05:02.457158 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2026-03-16 01:05:02.457170 | orchestrator | Monday 16 March 2026 01:04:13 +0000 (0:00:02.333) 0:01:08.049 **********
2026-03-16 01:05:02.457176 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:05:02.457182 | orchestrator |
2026-03-16 01:05:02.457187 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2026-03-16 01:05:02.457196 | orchestrator | Monday 16 March 2026 01:04:16 +0000 (0:00:02.579) 0:01:10.628 **********
2026-03-16 01:05:02.457202 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:05:02.457207 | orchestrator |
2026-03-16 01:05:02.457212 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-03-16 01:05:02.457218 | orchestrator | Monday 16 March 2026 01:04:31 +0000 (0:00:15.083) 0:01:25.712 **********
2026-03-16 01:05:02.457223 | orchestrator |
2026-03-16 01:05:02.457229 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-03-16 01:05:02.457234 | orchestrator | Monday 16 March 2026 01:04:31 +0000 (0:00:00.066) 0:01:25.779 **********
2026-03-16 01:05:02.457240 | orchestrator |
2026-03-16 01:05:02.457246 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-03-16 01:05:02.457251 | orchestrator | Monday 16 March 2026 01:04:31 +0000 (0:00:00.062) 0:01:25.841 **********
2026-03-16 01:05:02.457257 | orchestrator |
2026-03-16 01:05:02.457262 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2026-03-16 01:05:02.457268 | orchestrator | Monday 16 March 2026 01:04:31 +0000 (0:00:00.064) 0:01:25.906 **********
2026-03-16 01:05:02.457274 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:05:02.457279 | orchestrator | changed: [testbed-node-2]
2026-03-16 01:05:02.457285 | orchestrator | changed: [testbed-node-1]
2026-03-16 01:05:02.457290 | orchestrator |
2026-03-16 01:05:02.457296 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2026-03-16 01:05:02.457302 | orchestrator | Monday 16 March 2026 01:04:50 +0000 (0:00:18.480) 0:01:44.387 **********
2026-03-16 01:05:02.457307 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:05:02.457312 | orchestrator | changed: [testbed-node-2]
2026-03-16 01:05:02.457317 | orchestrator | changed: [testbed-node-1]
2026-03-16 01:05:02.457322 | orchestrator |
2026-03-16 01:05:02.457328 | orchestrator | PLAY RECAP *********************************************************************
2026-03-16 01:05:02.457334 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-16 01:05:02.457342 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-16 01:05:02.457348 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-16 01:05:02.457356 | orchestrator |
2026-03-16 01:05:02.457362 | orchestrator |
2026-03-16 01:05:02.457381 | orchestrator | TASKS RECAP ********************************************************************
2026-03-16 01:05:02.457388 | orchestrator | Monday 16 March 2026 01:04:59 +0000 (0:00:09.425) 0:01:53.812 **********
2026-03-16 01:05:02.457393 | orchestrator | ===============================================================================
2026-03-16 01:05:02.457400 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 18.48s
2026-03-16 01:05:02.457406 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.08s
2026-03-16 01:05:02.457416 | orchestrator | magnum : Restart magnum-conductor container ----------------------------- 9.43s
2026-03-16 01:05:02.457423 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 8.24s
2026-03-16 01:05:02.457467 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.38s
2026-03-16 01:05:02.457475 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.90s
2026-03-16 01:05:02.457484 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.70s
2026-03-16 01:05:02.457490 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.51s
2026-03-16 01:05:02.457500 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 3.48s
2026-03-16 01:05:02.457506 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.42s
2026-03-16 01:05:02.457512 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.31s
2026-03-16 01:05:02.457518 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.14s
2026-03-16 01:05:02.457525 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 2.96s
2026-03-16 01:05:02.457531 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 2.88s
2026-03-16 01:05:02.457537 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.66s
2026-03-16 01:05:02.457544 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.63s
2026-03-16 01:05:02.457550 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.58s
2026-03-16 01:05:02.457556 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.48s
2026-03-16 01:05:02.457563 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.33s
2026-03-16 01:05:02.457569 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 2.23s
2026-03-16 01:05:02.457574 | orchestrator | 2026-03-16 01:05:02 | INFO  | Task 1e84e9e3-46df-4ec1-8aca-a4890ebbd638 is in state STARTED
2026-03-16 01:05:02.457580 | orchestrator | 2026-03-16 01:05:02 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:05:05.495059 | orchestrator | 2026-03-16 01:05:05 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED
2026-03-16 01:05:05.495117 | orchestrator | 2026-03-16 01:05:05 | INFO  | Task b859e38e-dd67-46e0-af66-e713f10b74a2 is in state STARTED
2026-03-16 01:05:05.495126 | orchestrator | 2026-03-16 01:05:05 | INFO  | Task b5eb2108-9dfa-426c-b183-06439e490c0a is in state STARTED
2026-03-16 01:05:05.495609 | orchestrator | 2026-03-16 01:05:05 | INFO  | Task 1e84e9e3-46df-4ec1-8aca-a4890ebbd638 is in state STARTED
2026-03-16 01:05:05.495701 | orchestrator | 2026-03-16 01:05:05 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:05:08.538250 | orchestrator | 2026-03-16 01:05:08 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED
2026-03-16 01:05:08.538601 | orchestrator | 2026-03-16 01:05:08 | INFO  | Task b859e38e-dd67-46e0-af66-e713f10b74a2 is in state STARTED
2026-03-16 01:05:08.539794 | orchestrator | 2026-03-16 01:05:08 | INFO  | Task b5eb2108-9dfa-426c-b183-06439e490c0a is in state STARTED
2026-03-16 01:05:08.540361 | orchestrator | 2026-03-16 01:05:08 | INFO  | Task 1e84e9e3-46df-4ec1-8aca-a4890ebbd638 is in state STARTED
2026-03-16 01:05:08.540535 | orchestrator | 2026-03-16 01:05:08 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:05:11.573178 | orchestrator | 2026-03-16 01:05:11 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state STARTED
2026-03-16 01:05:11.575010 | orchestrator | 2026-03-16 01:05:11 | INFO  | Task b859e38e-dd67-46e0-af66-e713f10b74a2 is in state STARTED
2026-03-16 01:05:11.577242 | orchestrator | 2026-03-16 01:05:11 | INFO  | Task b5eb2108-9dfa-426c-b183-06439e490c0a is in state STARTED
2026-03-16 01:05:11.578269 | orchestrator | 2026-03-16 01:05:11 | INFO  | Task 1e84e9e3-46df-4ec1-8aca-a4890ebbd638 is in state STARTED
2026-03-16 01:05:11.578298 | orchestrator | 2026-03-16 01:05:11 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:05:14.606867 | orchestrator | 2026-03-16 01:05:14 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED
2026-03-16 01:05:14.607006 | orchestrator | 2026-03-16 01:05:14 | INFO  | Task e8f062de-43e9-4024-834e-17eb57f66b49 is in state SUCCESS
2026-03-16 01:05:14.608294 | orchestrator |
2026-03-16 01:05:14.608457 | orchestrator |
2026-03-16 01:05:14.608474 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-16 01:05:14.608482 | orchestrator |
2026-03-16 01:05:14.608489 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-16 01:05:14.608497 | orchestrator | Monday 16 March 2026 01:00:56 +0000 (0:00:00.263) 0:00:00.263 **********
2026-03-16 01:05:14.608503 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:05:14.608510 | orchestrator | ok: [testbed-node-1]
2026-03-16 01:05:14.608516 | orchestrator | ok: [testbed-node-2]
2026-03-16 01:05:14.608523 | orchestrator | ok: [testbed-node-3]
2026-03-16 01:05:14.608530 | orchestrator | ok: [testbed-node-4]
2026-03-16 01:05:14.608644 | orchestrator | ok: [testbed-node-5]
2026-03-16 01:05:14.608653 | orchestrator |
2026-03-16 01:05:14.608673 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-16 01:05:14.608681 | orchestrator | Monday 16 March 2026 01:00:57 +0000 (0:00:00.775) 0:00:01.038 **********
2026-03-16 01:05:14.608688 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2026-03-16 01:05:14.608695 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2026-03-16 01:05:14.608701 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2026-03-16 01:05:14.608707 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2026-03-16 01:05:14.608713 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2026-03-16 01:05:14.608787 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2026-03-16 01:05:14.608793 | orchestrator |
2026-03-16 01:05:14.608797 | orchestrator | PLAY [Apply role neutron] ******************************************************
2026-03-16 01:05:14.608801 | orchestrator |
2026-03-16 01:05:14.608804 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-03-16 01:05:14.608808 | orchestrator | Monday 16 March 2026 01:00:58 +0000 (0:00:00.611) 0:00:01.649 **********
2026-03-16 01:05:14.608813 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-16 01:05:14.608818 | orchestrator |
2026-03-16 01:05:14.608822 | orchestrator | TASK [neutron : Get container facts] *******************************************
2026-03-16 01:05:14.608826 | orchestrator | Monday 16 March 2026 01:00:59 +0000 (0:00:01.068) 0:00:02.718 **********
2026-03-16 01:05:14.608830 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:05:14.608834 | orchestrator | ok: [testbed-node-2]
2026-03-16 01:05:14.608838 | orchestrator | ok: [testbed-node-1]
2026-03-16 01:05:14.608841 | orchestrator | ok: [testbed-node-3]
2026-03-16 01:05:14.608845 | orchestrator | ok: [testbed-node-4]
2026-03-16 01:05:14.608849 | orchestrator | ok: [testbed-node-5]
2026-03-16 01:05:14.608852 | orchestrator |
2026-03-16 01:05:14.608856 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2026-03-16 01:05:14.608860 | orchestrator | Monday 16 March 2026 01:01:00 +0000 (0:00:01.302) 0:00:04.020 **********
2026-03-16 01:05:14.608864 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:05:14.608868 | orchestrator | ok: [testbed-node-2]
2026-03-16 01:05:14.608871 | orchestrator | ok: [testbed-node-1]
2026-03-16 01:05:14.608876 | orchestrator | ok: [testbed-node-3]
2026-03-16 01:05:14.608879 | orchestrator | ok: [testbed-node-4]
2026-03-16 01:05:14.608883 | orchestrator | ok: [testbed-node-5]
2026-03-16 01:05:14.608887 | orchestrator |
2026-03-16 01:05:14.608891 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2026-03-16 01:05:14.608895 | orchestrator | Monday 16 March 2026 01:01:01 +0000 (0:00:01.119) 0:00:05.140 **********
2026-03-16 01:05:14.608898 | orchestrator | ok: [testbed-node-0] => {
2026-03-16 01:05:14.608903 | orchestrator |  "changed": false,
2026-03-16 01:05:14.608907 | orchestrator |  "msg": "All assertions passed"
2026-03-16 01:05:14.608911 | orchestrator | }
2026-03-16 01:05:14.608915 | orchestrator | ok: [testbed-node-1] => {
2026-03-16 01:05:14.608919 | orchestrator |  "changed": false,
2026-03-16 01:05:14.608923 | orchestrator |  "msg": "All assertions passed"
2026-03-16 01:05:14.608926 | orchestrator | }
2026-03-16 01:05:14.608938 | orchestrator | ok: [testbed-node-2] => {
2026-03-16 01:05:14.608942 | orchestrator |  "changed": false,
2026-03-16 01:05:14.608946 | orchestrator |  "msg": "All assertions passed"
2026-03-16 01:05:14.608950 | orchestrator | }
2026-03-16 01:05:14.608953 | orchestrator | ok: [testbed-node-3] => {
2026-03-16 01:05:14.608957 | orchestrator |  "changed": false,
2026-03-16 01:05:14.608961 | orchestrator |  "msg": "All assertions passed"
2026-03-16 01:05:14.608964 | orchestrator | }
2026-03-16 01:05:14.608968 | orchestrator | ok: [testbed-node-4] => {
2026-03-16 01:05:14.608972 | orchestrator |  "changed": false,
2026-03-16 01:05:14.608976 | orchestrator |  "msg": "All assertions passed"
2026-03-16 01:05:14.608979 | orchestrator | }
2026-03-16 01:05:14.608983 | orchestrator | ok: [testbed-node-5] => {
2026-03-16 01:05:14.608987 | orchestrator |  "changed": false,
2026-03-16 01:05:14.608991 | orchestrator |  "msg": "All assertions passed"
2026-03-16 01:05:14.608994 | orchestrator | }
2026-03-16 01:05:14.608998 | orchestrator |
2026-03-16 01:05:14.609002 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2026-03-16 01:05:14.609006 | orchestrator | Monday 16 March 2026 01:01:02 +0000 (0:00:00.746) 0:00:05.887 **********
2026-03-16 01:05:14.609009 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:05:14.609013 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:05:14.609017 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:05:14.609021 | orchestrator | skipping: [testbed-node-3]
2026-03-16 01:05:14.609024 | orchestrator | skipping: [testbed-node-4]
2026-03-16 01:05:14.609028 | orchestrator | skipping: [testbed-node-5]
2026-03-16 01:05:14.609046 | orchestrator |
2026-03-16 01:05:14.609050 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2026-03-16 01:05:14.609054 | orchestrator | Monday 16 March 2026 01:01:03 +0000 (0:00:00.623) 0:00:06.510 **********
2026-03-16 01:05:14.609058 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2026-03-16 01:05:14.609062
| orchestrator | 2026-03-16 01:05:14.609066 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-03-16 01:05:14.609069 | orchestrator | Monday 16 March 2026 01:01:06 +0000 (0:00:03.575) 0:00:10.086 ********** 2026-03-16 01:05:14.609074 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-03-16 01:05:14.609078 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-03-16 01:05:14.609093 | orchestrator | 2026-03-16 01:05:14.609111 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-03-16 01:05:14.609116 | orchestrator | Monday 16 March 2026 01:01:13 +0000 (0:00:06.785) 0:00:16.871 ********** 2026-03-16 01:05:14.609120 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-16 01:05:14.609124 | orchestrator | 2026-03-16 01:05:14.609128 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-03-16 01:05:14.609131 | orchestrator | Monday 16 March 2026 01:01:17 +0000 (0:00:03.705) 0:00:20.577 ********** 2026-03-16 01:05:14.609135 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-16 01:05:14.609139 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-03-16 01:05:14.609143 | orchestrator | 2026-03-16 01:05:14.609151 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-03-16 01:05:14.609155 | orchestrator | Monday 16 March 2026 01:01:21 +0000 (0:00:04.036) 0:00:24.613 ********** 2026-03-16 01:05:14.609159 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-16 01:05:14.609163 | orchestrator | 2026-03-16 01:05:14.609167 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-03-16 01:05:14.609171 | orchestrator | Monday 16 March 2026 
01:01:25 +0000 (0:00:03.810) 0:00:28.424 ********** 2026-03-16 01:05:14.609174 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-03-16 01:05:14.609178 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-03-16 01:05:14.609182 | orchestrator | 2026-03-16 01:05:14.609192 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-16 01:05:14.609196 | orchestrator | Monday 16 March 2026 01:01:32 +0000 (0:00:07.213) 0:00:35.637 ********** 2026-03-16 01:05:14.609200 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:05:14.609204 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:05:14.609208 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:05:14.609215 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:05:14.609221 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:05:14.609227 | orchestrator | skipping: [testbed-node-5] 2026-03-16 01:05:14.609232 | orchestrator | 2026-03-16 01:05:14.609238 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-03-16 01:05:14.609244 | orchestrator | Monday 16 March 2026 01:01:32 +0000 (0:00:00.626) 0:00:36.264 ********** 2026-03-16 01:05:14.609250 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:05:14.609256 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:05:14.609262 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:05:14.609267 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:05:14.609273 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:05:14.609280 | orchestrator | skipping: [testbed-node-5] 2026-03-16 01:05:14.609286 | orchestrator | 2026-03-16 01:05:14.609292 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-03-16 01:05:14.609298 | orchestrator | Monday 16 March 2026 01:01:34 +0000 (0:00:01.904) 0:00:38.169 ********** 2026-03-16 
01:05:14.609305 | orchestrator | ok: [testbed-node-0] 2026-03-16 01:05:14.609310 | orchestrator | ok: [testbed-node-1] 2026-03-16 01:05:14.609314 | orchestrator | ok: [testbed-node-3] 2026-03-16 01:05:14.609319 | orchestrator | ok: [testbed-node-2] 2026-03-16 01:05:14.609325 | orchestrator | ok: [testbed-node-4] 2026-03-16 01:05:14.609330 | orchestrator | ok: [testbed-node-5] 2026-03-16 01:05:14.609374 | orchestrator | 2026-03-16 01:05:14.609385 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-16 01:05:14.609390 | orchestrator | Monday 16 March 2026 01:01:35 +0000 (0:00:01.006) 0:00:39.176 ********** 2026-03-16 01:05:14.609395 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:05:14.609401 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:05:14.609407 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:05:14.609412 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:05:14.609419 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:05:14.609424 | orchestrator | skipping: [testbed-node-5] 2026-03-16 01:05:14.609430 | orchestrator | 2026-03-16 01:05:14.609436 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-03-16 01:05:14.609442 | orchestrator | Monday 16 March 2026 01:01:38 +0000 (0:00:02.612) 0:00:41.788 ********** 2026-03-16 01:05:14.609453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-16 01:05:14.609474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-16 01:05:14.609493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-16 01:05:14.609502 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-16 01:05:14.609510 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-16 01:05:14.609516 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-16 01:05:14.609522 | orchestrator | 2026-03-16 01:05:14.609529 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-03-16 01:05:14.609536 | orchestrator | Monday 16 March 2026 01:01:41 +0000 (0:00:03.546) 0:00:45.334 ********** 2026-03-16 01:05:14.609551 | orchestrator | [WARNING]: Skipped 2026-03-16 01:05:14.609557 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-03-16 01:05:14.609564 | orchestrator | due to this access issue: 2026-03-16 01:05:14.609570 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-03-16 01:05:14.609576 | orchestrator | a directory 2026-03-16 01:05:14.609583 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-16 01:05:14.609589 | orchestrator | 2026-03-16 01:05:14.609600 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-16 01:05:14.609607 | orchestrator | Monday 16 March 2026 01:01:42 +0000 (0:00:00.818) 0:00:46.152 ********** 2026-03-16 01:05:14.609614 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-16 01:05:14.609622 | orchestrator | 2026-03-16 01:05:14.609627 | orchestrator | TASK 
[service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-03-16 01:05:14.609634 | orchestrator | Monday 16 March 2026 01:01:43 +0000 (0:00:00.966) 0:00:47.119 ********** 2026-03-16 01:05:14.609646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-16 01:05:14.609653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-16 01:05:14.609661 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-16 01:05:14.609667 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-16 01:05:14.609686 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-16 01:05:14.609705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-16 01:05:14.609712 | orchestrator | 2026-03-16 01:05:14.609719 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-03-16 01:05:14.609725 | orchestrator | Monday 16 March 2026 01:01:46 +0000 (0:00:02.908) 0:00:50.028 ********** 2026-03-16 01:05:14.609733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-16 01:05:14.609740 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:05:14.609747 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-16 01:05:14.609758 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:05:14.609769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-16 01:05:14.609780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-16 01:05:14.609787 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:05:14.609794 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:05:14.609800 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-16 01:05:14.609806 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:05:14.609812 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-16 01:05:14.609820 | orchestrator | skipping: [testbed-node-5] 2026-03-16 01:05:14.609827 | orchestrator | 2026-03-16 01:05:14.609839 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-03-16 01:05:14.609845 | orchestrator | Monday 16 March 2026 01:01:50 +0000 (0:00:03.946) 0:00:53.974 ********** 2026-03-16 01:05:14.609853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-16 01:05:14.609860 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:05:14.609875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-16 01:05:14.609882 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:05:14.609889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-16 01:05:14.609896 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:05:14.609903 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-16 01:05:14.609910 | orchestrator | skipping: [testbed-node-5] 2026-03-16 01:05:14.609923 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-16 01:05:14.609932 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:05:14.609939 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-16 01:05:14.609946 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:05:14.609952 | orchestrator | 2026-03-16 01:05:14.609959 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-03-16 01:05:14.609971 | orchestrator | Monday 16 March 2026 01:01:53 +0000 (0:00:03.269) 0:00:57.244 ********** 2026-03-16 01:05:14.609978 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:05:14.609985 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:05:14.609990 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:05:14.609996 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:05:14.610002 | 
orchestrator | skipping: [testbed-node-3] 2026-03-16 01:05:14.610007 | orchestrator | skipping: [testbed-node-5] 2026-03-16 01:05:14.610069 | orchestrator | 2026-03-16 01:05:14.610079 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-03-16 01:05:14.610086 | orchestrator | Monday 16 March 2026 01:01:56 +0000 (0:00:02.721) 0:00:59.965 ********** 2026-03-16 01:05:14.610092 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:05:14.610138 | orchestrator | 2026-03-16 01:05:14.610152 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-03-16 01:05:14.610159 | orchestrator | Monday 16 March 2026 01:01:56 +0000 (0:00:00.224) 0:01:00.189 ********** 2026-03-16 01:05:14.610166 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:05:14.610172 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:05:14.610179 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:05:14.610185 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:05:14.610192 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:05:14.610199 | orchestrator | skipping: [testbed-node-5] 2026-03-16 01:05:14.610206 | orchestrator | 2026-03-16 01:05:14.610214 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-03-16 01:05:14.610221 | orchestrator | Monday 16 March 2026 01:01:57 +0000 (0:00:00.660) 0:01:00.850 ********** 2026-03-16 01:05:14.610230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-16 01:05:14.610247 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:05:14.610254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-16 01:05:14.610261 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:05:14.610268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-16 01:05:14.610275 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:05:14.610530 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-16 01:05:14.610587 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:05:14.610595 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-16 01:05:14.610615 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:05:14.610620 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-16 01:05:14.610624 | orchestrator | skipping: [testbed-node-5] 2026-03-16 01:05:14.610628 | orchestrator | 2026-03-16 01:05:14.610633 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-03-16 01:05:14.610638 | orchestrator | Monday 16 March 2026 01:02:00 +0000 (0:00:03.023) 0:01:03.874 ********** 2026-03-16 01:05:14.610643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-16 01:05:14.610657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-16 01:05:14.610666 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-16 01:05:14.610670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-16 01:05:14.610679 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-16 01:05:14.610683 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-16 01:05:14.610688 | orchestrator | 2026-03-16 01:05:14.610692 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-03-16 01:05:14.610696 | orchestrator | Monday 16 March 2026 01:02:05 +0000 (0:00:05.173) 0:01:09.047 ********** 2026-03-16 01:05:14.610704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-16 01:05:14.610712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 
'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-16 01:05:14.610723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-16 01:05:14.610730 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 
'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-16 01:05:14.610740 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-16 01:05:14.610752 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-16 01:05:14.610758 | orchestrator | 2026-03-16 01:05:14.610764 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-03-16 01:05:14.610771 | orchestrator | Monday 16 March 2026 01:02:13 +0000 (0:00:08.214) 0:01:17.262 ********** 2026-03-16 01:05:14.610781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-16 01:05:14.610793 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:05:14.610800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-16 01:05:14.610806 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:05:14.610812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-16 01:05:14.610819 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:05:14.610825 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-16 01:05:14.610832 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:05:14.610843 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-16 01:05:14.610854 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:05:14.610861 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-16 
01:05:14.610868 | orchestrator | skipping: [testbed-node-5] 2026-03-16 01:05:14.610874 | orchestrator | 2026-03-16 01:05:14.610881 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-03-16 01:05:14.610887 | orchestrator | Monday 16 March 2026 01:02:16 +0000 (0:00:02.804) 0:01:20.066 ********** 2026-03-16 01:05:14.610894 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:05:14.610911 | orchestrator | skipping: [testbed-node-5] 2026-03-16 01:05:14.610918 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:05:14.610925 | orchestrator | changed: [testbed-node-1] 2026-03-16 01:05:14.610932 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:05:14.610939 | orchestrator | changed: [testbed-node-2] 2026-03-16 01:05:14.610946 | orchestrator | 2026-03-16 01:05:14.610954 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-03-16 01:05:14.610961 | orchestrator | Monday 16 March 2026 01:02:19 +0000 (0:00:03.015) 0:01:23.082 ********** 2026-03-16 01:05:14.610969 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-16 01:05:14.610977 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:05:14.610985 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-16 01:05:14.610992 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:05:14.611036 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-16 01:05:14.611053 | orchestrator | skipping: [testbed-node-5] 2026-03-16 01:05:14.611065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-16 01:05:14.611073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-16 01:05:14.611080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-16 01:05:14.611087 | orchestrator |
2026-03-16 01:05:14.611095 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2026-03-16 01:05:14.611103 | orchestrator | Monday 16 March 2026 01:02:23 +0000 (0:00:04.146) 0:01:27.228 **********
2026-03-16 01:05:14.611110 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:05:14.611118 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:05:14.611124 | orchestrator | skipping: [testbed-node-3]
2026-03-16 01:05:14.611131 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:05:14.611139 | orchestrator | skipping: [testbed-node-4]
2026-03-16 01:05:14.611146 | orchestrator | skipping: [testbed-node-5]
2026-03-16 01:05:14.611160 | orchestrator |
2026-03-16 01:05:14.611167 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2026-03-16 01:05:14.611175 | orchestrator | Monday 16 March 2026 01:02:26 +0000 (0:00:03.063) 0:01:30.292 **********
2026-03-16 01:05:14.611183 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:05:14.611190 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:05:14.611198 | orchestrator | skipping: [testbed-node-4]
2026-03-16 01:05:14.611205 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:05:14.611212 | orchestrator | skipping: [testbed-node-5]
2026-03-16 01:05:14.611219 | orchestrator | skipping: [testbed-node-3]
2026-03-16 01:05:14.611227 | orchestrator |
2026-03-16 01:05:14.611235 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2026-03-16 01:05:14.611243 | orchestrator | Monday 16 March 2026 01:02:29 +0000 (0:00:02.555) 0:01:32.847 **********
2026-03-16 01:05:14.611256 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:05:14.611263 | orchestrator | skipping: [testbed-node-3]
2026-03-16 01:05:14.611270 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:05:14.611277 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:05:14.611283 | orchestrator | skipping: [testbed-node-4]
2026-03-16 01:05:14.611291 | orchestrator | skipping: [testbed-node-5]
2026-03-16 01:05:14.611299 | orchestrator |
2026-03-16 01:05:14.611307 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2026-03-16 01:05:14.611314 | orchestrator | Monday 16 March 2026 01:02:31 +0000 (0:00:01.954) 0:01:34.802 **********
2026-03-16 01:05:14.611321 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:05:14.611328 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:05:14.611364 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:05:14.611379 | orchestrator | skipping: [testbed-node-3]
2026-03-16 01:05:14.611386 | orchestrator | skipping: [testbed-node-4]
2026-03-16 01:05:14.611392 | orchestrator | skipping: [testbed-node-5]
2026-03-16 01:05:14.611399 | orchestrator |
2026-03-16 01:05:14.611406 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2026-03-16 01:05:14.611413 | orchestrator | Monday 16 March 2026 01:02:33 +0000 (0:00:02.159) 0:01:36.961 **********
2026-03-16 01:05:14.611418 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:05:14.611424 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:05:14.611430 | orchestrator | skipping: [testbed-node-3]
2026-03-16 01:05:14.611437 | orchestrator | skipping: [testbed-node-4]
2026-03-16 01:05:14.611443 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:05:14.611450 | orchestrator | skipping: [testbed-node-5]
2026-03-16 01:05:14.611456 | orchestrator |
2026-03-16 01:05:14.611463 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2026-03-16 01:05:14.611468 | orchestrator | Monday 16 March 2026 01:02:35 +0000 (0:00:02.427) 0:01:39.388 **********
2026-03-16 01:05:14.611475 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:05:14.611480 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:05:14.611486 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:05:14.611492 | orchestrator | skipping: [testbed-node-4]
2026-03-16 01:05:14.611497 | orchestrator | skipping: [testbed-node-5]
2026-03-16 01:05:14.611504 | orchestrator | skipping: [testbed-node-3]
2026-03-16 01:05:14.611510 | orchestrator |
2026-03-16 01:05:14.611516 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2026-03-16 01:05:14.611523 | orchestrator | Monday 16 March 2026 01:02:38 +0000 (0:00:02.086) 0:01:41.475 **********
2026-03-16 01:05:14.611530 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-16 01:05:14.611538 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:05:14.611545 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-16 01:05:14.611552 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:05:14.611558 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-16 01:05:14.611565 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:05:14.611587 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-16 01:05:14.611594 | orchestrator | skipping: [testbed-node-4]
2026-03-16 01:05:14.611601 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-16 01:05:14.611608 | orchestrator | skipping: [testbed-node-3]
2026-03-16 01:05:14.611615 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-16 01:05:14.611621 | orchestrator | skipping: [testbed-node-5]
2026-03-16 01:05:14.611628 | orchestrator |
2026-03-16 01:05:14.611634 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2026-03-16 01:05:14.611640 | orchestrator | Monday 16 March 2026 01:02:40 +0000 (0:00:02.007) 0:01:43.483 **********
2026-03-16 01:05:14.611648 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-16 01:05:14.611656 | orchestrator | skipping: [testbed-node-3]
2026-03-16 01:05:14.611671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-16 01:05:14.611678 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:05:14.611691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-16 01:05:14.611698 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:05:14.611704 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-16 01:05:14.611719 | orchestrator | skipping: [testbed-node-4]
2026-03-16 01:05:14.611726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-16 01:05:14.611733 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:05:14.611739 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-16 01:05:14.611746 | orchestrator | skipping: [testbed-node-5]
2026-03-16 01:05:14.611752 | orchestrator |
2026-03-16 01:05:14.611758 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] *********************************
2026-03-16 01:05:14.611764 | orchestrator | Monday 16 March 2026 01:02:42 +0000 (0:00:02.036) 0:01:45.520 **********
2026-03-16 01:05:14.611781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-16 01:05:14.611790 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:05:14.611797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-16 01:05:14.611810 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:05:14.611817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-16 01:05:14.611823 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:05:14.611830 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-16 01:05:14.611836 | orchestrator | skipping: [testbed-node-3]
2026-03-16 01:05:14.611902 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-16 01:05:14.611939 | orchestrator | skipping: [testbed-node-5]
2026-03-16 01:05:14.611952 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-16 01:05:14.611959 | orchestrator | skipping: [testbed-node-4]
2026-03-16 01:05:14.611973 | orchestrator |
2026-03-16 01:05:14.611979 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2026-03-16 01:05:14.611984 | orchestrator | Monday 16 March 2026 01:02:44 +0000 (0:00:01.894) 0:01:47.414 **********
2026-03-16 01:05:14.611991 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:05:14.611996 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:05:14.612002 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:05:14.612008 | orchestrator | skipping: [testbed-node-5]
2026-03-16 01:05:14.612014 | orchestrator | skipping: [testbed-node-3]
2026-03-16 01:05:14.612019 | orchestrator | skipping: [testbed-node-4]
2026-03-16 01:05:14.612025 | orchestrator |
2026-03-16 01:05:14.612031 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2026-03-16 01:05:14.612037 | orchestrator | Monday 16 March 2026 01:02:45 +0000 (0:00:01.817) 0:01:49.232 **********
2026-03-16 01:05:14.612043 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:05:14.612049 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:05:14.612055 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:05:14.612060 | orchestrator | changed: [testbed-node-3]
2026-03-16 01:05:14.612065 | orchestrator | changed: [testbed-node-5]
2026-03-16 01:05:14.612072 | orchestrator | changed: [testbed-node-4]
2026-03-16 01:05:14.612078 | orchestrator |
2026-03-16 01:05:14.612084 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2026-03-16 01:05:14.612090 | orchestrator | Monday 16 March 2026 01:02:50 +0000 (0:00:04.694) 0:01:53.926 **********
2026-03-16 01:05:14.612097 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:05:14.612102 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:05:14.612109 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:05:14.612115 | orchestrator | skipping: [testbed-node-4]
2026-03-16 01:05:14.612121 | orchestrator | skipping: [testbed-node-3]
2026-03-16 01:05:14.612126 | orchestrator | skipping: [testbed-node-5]
2026-03-16 01:05:14.612133 | orchestrator |
2026-03-16 01:05:14.612139 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2026-03-16 01:05:14.612145 | orchestrator | Monday 16 March 2026 01:02:52 +0000 (0:00:01.770) 0:01:55.697 **********
2026-03-16 01:05:14.612152 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:05:14.612158 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:05:14.612165 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:05:14.612172 | orchestrator | skipping: [testbed-node-3]
2026-03-16 01:05:14.612178 | orchestrator | skipping: [testbed-node-4]
2026-03-16 01:05:14.612185 | orchestrator | skipping: [testbed-node-5]
2026-03-16 01:05:14.612191 | orchestrator |
2026-03-16 01:05:14.612198 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2026-03-16 01:05:14.612204 | orchestrator | Monday 16 March 2026 01:02:54 +0000 (0:00:01.873) 0:01:57.571 **********
2026-03-16 01:05:14.612211 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:05:14.612217 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:05:14.612224 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:05:14.612231 | orchestrator | skipping: [testbed-node-3]
2026-03-16 01:05:14.612237 | orchestrator | skipping: [testbed-node-5]
2026-03-16 01:05:14.612244 | orchestrator | skipping: [testbed-node-4]
2026-03-16 01:05:14.612251 | orchestrator |
2026-03-16 01:05:14.612257 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2026-03-16 01:05:14.612263 | orchestrator | Monday 16 March 2026 01:02:57 +0000 (0:00:03.368) 0:02:00.939 **********
2026-03-16 01:05:14.612270 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:05:14.612277 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:05:14.612283 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:05:14.612289 | orchestrator | skipping: [testbed-node-5]
2026-03-16 01:05:14.612296 | orchestrator | skipping: [testbed-node-4]
2026-03-16 01:05:14.612302 | orchestrator | skipping: [testbed-node-3]
2026-03-16 01:05:14.612309 | orchestrator |
2026-03-16 01:05:14.612316 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2026-03-16 01:05:14.612323 | orchestrator | Monday 16 March 2026 01:03:00 +0000 (0:00:03.090) 0:02:04.029 **********
2026-03-16 01:05:14.612362 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:05:14.612370 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:05:14.612376 | orchestrator | skipping: [testbed-node-3]
2026-03-16 01:05:14.612382 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:05:14.612388 | orchestrator | skipping: [testbed-node-4]
2026-03-16 01:05:14.612394 | orchestrator | skipping: [testbed-node-5]
2026-03-16 01:05:14.612400 | orchestrator |
2026-03-16 01:05:14.612406 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2026-03-16 01:05:14.612412 | orchestrator | Monday 16 March 2026 01:03:03 +0000 (0:00:02.518) 0:02:06.548 **********
2026-03-16 01:05:14.612416 | orchestrator | skipping: [testbed-node-4]
2026-03-16 01:05:14.612420 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:05:14.612423 | orchestrator | skipping: [testbed-node-3]
2026-03-16 01:05:14.612427 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:05:14.612431 | orchestrator | skipping: [testbed-node-5]
2026-03-16 01:05:14.612435 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:05:14.612439 | orchestrator |
2026-03-16 01:05:14.612443 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2026-03-16 01:05:14.612458 | orchestrator | Monday 16 March 2026 01:03:05 +0000 (0:00:02.417) 0:02:08.965 **********
2026-03-16 01:05:14.612464 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:05:14.612471 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:05:14.612477 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:05:14.612484 | orchestrator | skipping: [testbed-node-4]
2026-03-16 01:05:14.612490 | orchestrator | skipping: [testbed-node-3]
2026-03-16 01:05:14.612496 | orchestrator | skipping: [testbed-node-5]
2026-03-16 01:05:14.612503 | orchestrator |
2026-03-16 01:05:14.612509 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2026-03-16 01:05:14.612515 | orchestrator | Monday 16 March 2026 01:03:08 +0000 (0:00:03.045) 0:02:12.010 **********
2026-03-16 01:05:14.612529 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-16 01:05:14.612535 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:05:14.612541 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-16 01:05:14.612547 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:05:14.612553 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-16 01:05:14.612563 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:05:14.612570 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-16 01:05:14.612576 | orchestrator | skipping: [testbed-node-3]
2026-03-16 01:05:14.612582 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-16 01:05:14.612588 | orchestrator | skipping: [testbed-node-4]
2026-03-16 01:05:14.612594 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-16 01:05:14.612601 | orchestrator | skipping: [testbed-node-5]
2026-03-16 01:05:14.612607 | orchestrator |
2026-03-16 01:05:14.612614 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2026-03-16 01:05:14.612619 | orchestrator | Monday 16 March 2026 01:03:11 +0000 (0:00:02.399) 0:02:14.410 **********
2026-03-16 01:05:14.612628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-16 01:05:14.612642 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:05:14.612647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-16 01:05:14.612651 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:05:14.612662 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-16 01:05:14.612669 | orchestrator | skipping: [testbed-node-3]
2026-03-16 01:05:14.612680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-16 01:05:14.612687 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:05:14.612693 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-16 01:05:14.612705 | orchestrator | skipping: [testbed-node-5]
2026-03-16 01:05:14.612711 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-16 01:05:14.612718 | orchestrator | skipping: [testbed-node-4]
2026-03-16 01:05:14.612724 | orchestrator |
2026-03-16 01:05:14.612729 | orchestrator | TASK [neutron : Check neutron containers] **************************************
2026-03-16 01:05:14.612735 | orchestrator | Monday 16 March 2026 01:03:12 +0000 (0:00:01.880) 0:02:16.291 **********
2026-03-16 01:05:14.612741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-16 01:05:14.612760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-16 01:05:14.612768 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-16 01:05:14.612775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-16 01:05:14.612789 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-16 01:05:14.612797 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-16 01:05:14.612801 | orchestrator |
2026-03-16 01:05:14.612805 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-03-16 01:05:14.612810 | orchestrator | Monday 16 March 2026 01:03:16 +0000 (0:00:03.831) 0:02:20.122 **********
2026-03-16 01:05:14.612814 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:05:14.612818 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:05:14.612822 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:05:14.612826 | orchestrator | skipping: [testbed-node-3]
2026-03-16 01:05:14.612830 | orchestrator | skipping: [testbed-node-4]
2026-03-16 01:05:14.612837 | orchestrator | skipping: [testbed-node-5]
2026-03-16 01:05:14.612841 | orchestrator |
2026-03-16 01:05:14.612845 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2026-03-16 01:05:14.612849 | orchestrator | Monday 16 March 2026 01:03:17 +0000 (0:00:00.437) 0:02:20.560 **********
2026-03-16 01:05:14.612853 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:05:14.612857 | orchestrator |
2026-03-16 01:05:14.612861 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2026-03-16 01:05:14.612865 | orchestrator | Monday 16 March 2026 01:03:19 +0000 (0:00:01.990) 0:02:22.551 **********
2026-03-16 01:05:14.612868 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:05:14.612872 | orchestrator |
2026-03-16 01:05:14.612879 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2026-03-16 01:05:14.612883 | orchestrator | Monday 16 March 2026 01:03:21 +0000 (0:00:02.111) 0:02:24.662 **********
2026-03-16 01:05:14.612887 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:05:14.612891 | orchestrator |
2026-03-16 01:05:14.612895 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-16 01:05:14.612899 | orchestrator | Monday 16 March 2026 01:03:58 +0000 (0:00:37.135) 0:03:01.798 **********
2026-03-16 01:05:14.612910 | orchestrator |
2026-03-16 01:05:14.612914 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-16 01:05:14.612952 | orchestrator | Monday 16 March 2026 01:03:58 +0000 (0:00:00.059) 0:03:01.857 **********
2026-03-16 01:05:14.612957 | orchestrator |
2026-03-16 01:05:14.612961 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-16 01:05:14.612965 | orchestrator | Monday 16 March 2026 01:03:58 +0000 (0:00:00.190) 0:03:02.047 **********
2026-03-16 01:05:14.612969 | orchestrator |
2026-03-16 01:05:14.612973 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-16 01:05:14.612977 | orchestrator | Monday 16 March 2026 01:03:58 +0000 (0:00:00.061) 0:03:02.109 **********
2026-03-16 01:05:14.612981 | orchestrator |
2026-03-16 01:05:14.612985 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-16 01:05:14.612989 | orchestrator | Monday 16 March 2026 01:03:58 +0000 (0:00:00.061) 0:03:02.170 **********
2026-03-16 01:05:14.612993 | orchestrator |
2026-03-16 01:05:14.612997 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-16 01:05:14.613001 | orchestrator | Monday 16 March 2026 01:03:58 +0000 (0:00:00.062) 0:03:02.233 **********
2026-03-16 01:05:14.613005 | orchestrator |
2026-03-16 01:05:14.613008 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-03-16 01:05:14.613012 | orchestrator | Monday 16 March 2026 01:03:58 +0000 (0:00:00.062) 0:03:02.296 **********
2026-03-16 01:05:14.613016 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:05:14.613020 | orchestrator | changed: [testbed-node-1]
2026-03-16 01:05:14.613024 | orchestrator | changed: [testbed-node-2]
2026-03-16 01:05:14.613028 | orchestrator |
2026-03-16 01:05:14.613032 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-03-16 01:05:14.613035 | orchestrator | Monday 16 March 2026 01:04:22 +0000 (0:00:23.307) 0:03:25.604 ********** 2026-03-16
01:05:14.613039 | orchestrator | changed: [testbed-node-3] 2026-03-16 01:05:14.613043 | orchestrator | changed: [testbed-node-5] 2026-03-16 01:05:14.613047 | orchestrator | changed: [testbed-node-4] 2026-03-16 01:05:14.613051 | orchestrator | 2026-03-16 01:05:14.613055 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-16 01:05:14.613060 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-16 01:05:14.613065 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-16 01:05:14.613069 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-16 01:05:14.613073 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-16 01:05:14.613077 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-16 01:05:14.613082 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-16 01:05:14.613088 | orchestrator | 2026-03-16 01:05:14.613095 | orchestrator | 2026-03-16 01:05:14.613103 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-16 01:05:14.613113 | orchestrator | Monday 16 March 2026 01:05:13 +0000 (0:00:50.872) 0:04:16.476 ********** 2026-03-16 01:05:14.613119 | orchestrator | =============================================================================== 2026-03-16 01:05:14.613125 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 50.87s 2026-03-16 01:05:14.613132 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 37.14s 2026-03-16 01:05:14.613138 | orchestrator | neutron : Restart neutron-server container ----------------------------- 23.31s 
2026-03-16 01:05:14.613152 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 8.21s 2026-03-16 01:05:14.613159 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.21s 2026-03-16 01:05:14.613165 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.79s 2026-03-16 01:05:14.613173 | orchestrator | neutron : Copying over config.json files for services ------------------- 5.17s 2026-03-16 01:05:14.613179 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.69s 2026-03-16 01:05:14.613191 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.15s 2026-03-16 01:05:14.613195 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.04s 2026-03-16 01:05:14.613199 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 3.95s 2026-03-16 01:05:14.613204 | orchestrator | neutron : Check neutron containers -------------------------------------- 3.83s 2026-03-16 01:05:14.613208 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.81s 2026-03-16 01:05:14.613211 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.71s 2026-03-16 01:05:14.613220 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.58s 2026-03-16 01:05:14.613224 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.55s 2026-03-16 01:05:14.613228 | orchestrator | neutron : Copying over bgp_dragent.ini ---------------------------------- 3.37s 2026-03-16 01:05:14.613232 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.27s 2026-03-16 01:05:14.613235 | orchestrator | neutron : Copying over ovn_agent.ini ------------------------------------ 3.09s 
2026-03-16 01:05:14.613239 | orchestrator | neutron : Copying over linuxbridge_agent.ini ---------------------------- 3.06s 2026-03-16 01:05:14.613243 | orchestrator | 2026-03-16 01:05:14 | INFO  | Task b859e38e-dd67-46e0-af66-e713f10b74a2 is in state STARTED 2026-03-16 01:05:14.613248 | orchestrator | 2026-03-16 01:05:14 | INFO  | Task b5eb2108-9dfa-426c-b183-06439e490c0a is in state STARTED 2026-03-16 01:05:14.613252 | orchestrator | 2026-03-16 01:05:14 | INFO  | Task 1e84e9e3-46df-4ec1-8aca-a4890ebbd638 is in state STARTED 2026-03-16 01:05:14.613256 | orchestrator | 2026-03-16 01:05:14 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:05:17.655583 | orchestrator | 2026-03-16 01:05:17 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:05:17.656867 | orchestrator | 2026-03-16 01:05:17 | INFO  | Task b859e38e-dd67-46e0-af66-e713f10b74a2 is in state STARTED 2026-03-16 01:05:17.657568 | orchestrator | 2026-03-16 01:05:17 | INFO  | Task b5eb2108-9dfa-426c-b183-06439e490c0a is in state STARTED 2026-03-16 01:05:17.659126 | orchestrator | 2026-03-16 01:05:17 | INFO  | Task 1e84e9e3-46df-4ec1-8aca-a4890ebbd638 is in state STARTED 2026-03-16 01:05:17.659154 | orchestrator | 2026-03-16 01:05:17 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:05:20.689059 | orchestrator | 2026-03-16 01:05:20 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:05:20.689640 | orchestrator | 2026-03-16 01:05:20 | INFO  | Task b859e38e-dd67-46e0-af66-e713f10b74a2 is in state STARTED 2026-03-16 01:05:20.691141 | orchestrator | 2026-03-16 01:05:20 | INFO  | Task b5eb2108-9dfa-426c-b183-06439e490c0a is in state STARTED 2026-03-16 01:05:20.691812 | orchestrator | 2026-03-16 01:05:20 | INFO  | Task 1e84e9e3-46df-4ec1-8aca-a4890ebbd638 is in state STARTED 2026-03-16 01:05:20.691832 | orchestrator | 2026-03-16 01:05:20 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:05:23.725143 | orchestrator | 
2026-03-16 01:05:23 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:06:24.691612 | orchestrator | 2026-03-16 01:06:24 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:06:24.691699 | orchestrator | 2026-03-16 01:06:24 | INFO  | Task b859e38e-dd67-46e0-af66-e713f10b74a2 is in state STARTED 2026-03-16 01:06:24.691705 | orchestrator | 2026-03-16 01:06:24 | INFO  | Task b5eb2108-9dfa-426c-b183-06439e490c0a is in state STARTED 2026-03-16 01:06:24.691710 | orchestrator | 2026-03-16 01:06:24 | INFO  | Task 8e3c010c-076b-473a-afb5-bc37eea7f35e is in state STARTED 2026-03-16 01:06:24.694872 | orchestrator | 2026-03-16 01:06:24 | INFO  | Task 1e84e9e3-46df-4ec1-8aca-a4890ebbd638 is in state SUCCESS 2026-03-16 01:06:24.696744 | orchestrator | 2026-03-16 01:06:24.696810 | orchestrator 
| 2026-03-16 01:06:24.696820 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-16 01:06:24.696828 | orchestrator | 2026-03-16 01:06:24.696834 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-16 01:06:24.696841 | orchestrator | Monday 16 March 2026 01:03:17 +0000 (0:00:00.217) 0:00:00.217 ********** 2026-03-16 01:06:24.696847 | orchestrator | ok: [testbed-manager] 2026-03-16 01:06:24.696855 | orchestrator | ok: [testbed-node-0] 2026-03-16 01:06:24.696861 | orchestrator | ok: [testbed-node-1] 2026-03-16 01:06:24.696867 | orchestrator | ok: [testbed-node-2] 2026-03-16 01:06:24.696874 | orchestrator | ok: [testbed-node-3] 2026-03-16 01:06:24.696880 | orchestrator | ok: [testbed-node-4] 2026-03-16 01:06:24.696887 | orchestrator | ok: [testbed-node-5] 2026-03-16 01:06:24.696893 | orchestrator | 2026-03-16 01:06:24.696900 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-16 01:06:24.696907 | orchestrator | Monday 16 March 2026 01:03:18 +0000 (0:00:00.619) 0:00:00.837 ********** 2026-03-16 01:06:24.696914 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-03-16 01:06:24.696921 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-03-16 01:06:24.696928 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-03-16 01:06:24.696959 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-03-16 01:06:24.696966 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-03-16 01:06:24.696972 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-03-16 01:06:24.696978 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-03-16 01:06:24.696985 | orchestrator | 2026-03-16 01:06:24.696991 | orchestrator | PLAY [Apply role prometheus] 
*************************************************** 2026-03-16 01:06:24.696997 | orchestrator | 2026-03-16 01:06:24.697003 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-16 01:06:24.697010 | orchestrator | Monday 16 March 2026 01:03:19 +0000 (0:00:00.700) 0:00:01.537 ********** 2026-03-16 01:06:24.697019 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-16 01:06:24.697026 | orchestrator | 2026-03-16 01:06:24.697033 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-03-16 01:06:24.697039 | orchestrator | Monday 16 March 2026 01:03:20 +0000 (0:00:01.209) 0:00:02.746 ********** 2026-03-16 01:06:24.697062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-16 01:06:24.697073 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-16 01:06:24.697081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-16 01:06:24.697088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-16 01:06:24.697110 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-16 01:06:24.697125 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-16 01:06:24.697142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 01:06:24.697151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 01:06:24.697169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 01:06:24.697176 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-16 01:06:24.697247 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-16 01:06:24.697265 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-16 01:06:24.697272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 01:06:24.697287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 01:06:24.697294 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-16 01:06:24.697301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 01:06:24.697312 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-16 01:06:24.697320 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-16 01:06:24.697332 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-16 01:06:24.697348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-16 01:06:24.697357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-16 01:06:24.697364 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-16 01:06:24.697374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-16 01:06:24.697382 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-16 01:06:24.697389 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 01:06:24.697396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 01:06:24.697413 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-16 01:06:24.697419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 01:06:24.697426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-16 01:06:24.697432 | orchestrator | 2026-03-16 01:06:24.697439 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-16 01:06:24.697446 | orchestrator | Monday 16 March 2026 01:03:23 +0000 (0:00:02.992) 0:00:05.739 ********** 2026-03-16 01:06:24.697452 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-16 01:06:24.697459 | orchestrator | 2026-03-16 01:06:24.697465 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-03-16 01:06:24.697471 | orchestrator | Monday 16 March 2026 01:03:24 +0000 (0:00:01.264) 0:00:07.004 ********** 2026-03-16 01:06:24.697481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-16 01:06:24.697488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-16 01:06:24.697494 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-16 01:06:24.697509 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-16 01:06:24.697516 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-16 01:06:24.697522 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-16 01:06:24.697527 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-16 01:06:24.697537 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-16 01:06:24.697543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 01:06:24.697550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 01:06:24.697561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 01:06:24.697573 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-16 01:06:24.697580 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-16 01:06:24.697587 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-16 01:06:24.697594 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-16 01:06:24.697608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 01:06:24.697615 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-16 01:06:24.697627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 01:06:24.697634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 01:06:24.697805 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 
'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-16 01:06:24.697822 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-16 01:06:24.697830 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-16 01:06:24.697843 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-16 01:06:24.697851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-16 01:06:24.697864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-16 01:06:24.698110 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 01:06:24.698133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 01:06:24.698141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 01:06:24.698149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-03-16 01:06:24.698155 | orchestrator | 2026-03-16 01:06:24.698163 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-03-16 01:06:24.698170 | orchestrator | Monday 16 March 2026 01:03:29 +0000 (0:00:05.264) 0:00:12.269 ********** 2026-03-16 01:06:24.698206 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-16 01:06:24.698225 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-16 01:06:24.698232 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-16 01:06:24.698249 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-16 01:06:24.698256 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 01:06:24.698263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-16 01:06:24.698270 | orchestrator | skipping: [testbed-manager] 2026-03-16 01:06:24.698281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 01:06:24.698288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 01:06:24.698300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-16 01:06:24.698307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-16 01:06:24.698317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 01:06:24.698324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 01:06:24.698331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 01:06:24.698384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-16 01:06:24.698395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 01:06:24.698407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-16 01:06:24.698414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 01:06:24.698421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 01:06:24.698433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-16 01:06:24.698440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 01:06:24.698446 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:06:24.698453 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:06:24.698459 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:06:24.698466 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-16 01:06:24.698475 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-16 01:06:24.698486 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-16 01:06:24.698493 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:06:24.698499 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-16 01:06:24.698506 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-16 01:06:24.698517 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-16 01:06:24.698532 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:06:24.698538 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-16 01:06:24.698545 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-16 01:06:24.698551 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-16 
01:06:24.698562 | orchestrator | skipping: [testbed-node-5] 2026-03-16 01:06:24.698568 | orchestrator | 2026-03-16 01:06:24.698574 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-03-16 01:06:24.698580 | orchestrator | Monday 16 March 2026 01:03:31 +0000 (0:00:01.553) 0:00:13.822 ********** 2026-03-16 01:06:24.698589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-16 01:06:24.698595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 01:06:24.698601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 
01:06:24.698611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-16 01:06:24.698618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 01:06:24.698624 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:06:24.698630 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-16 01:06:24.698641 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-16 01:06:24.698652 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-16 01:06:24.698659 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-16 01:06:24.698669 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 01:06:24.698675 | orchestrator | skipping: [testbed-manager] 2026-03-16 01:06:24.698682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-16 01:06:24.698688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 01:06:24.698698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 01:06:24.698704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-16 01:06:24.698713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 01:06:24.698719 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:06:24.698725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-16 01:06:24.698732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 01:06:24.698741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 01:06:24.698798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-16 01:06:24.698806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-16 01:06:24.698818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-16 01:06:24.698826 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:06:24.698837 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-16 01:06:24.698845 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-16 01:06:24.698852 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:06:24.698859 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-16 01:06:24.698866 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-16 01:06:24 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:06:24.698876 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 
'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-16 01:06:24.699240 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:06:24.699248 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-16 01:06:24.699254 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-16 01:06:24.699266 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-16 01:06:24.699271 | orchestrator | skipping: [testbed-node-5] 2026-03-16 01:06:24.699274 | orchestrator | 2026-03-16 01:06:24.699278 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-03-16 01:06:24.699283 | orchestrator | Monday 16 March 2026 01:03:33 +0000 (0:00:02.505) 0:00:16.328 ********** 2026-03-16 01:06:24.699287 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-16 01:06:24.699291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-16 01:06:24.699300 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-16 01:06:24.699308 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-16 01:06:24.699312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-16 01:06:24.699316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-16 01:06:24.699323 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-16 01:06:24.699327 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-16 01:06:24.699331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 01:06:24.699335 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-16 01:06:24.699342 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-16 01:06:24.699349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 01:06:24.699354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 01:06:24.699358 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-16 01:06:24.699364 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-16 01:06:24.699368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 01:06:24.699372 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-16 01:06:24.699376 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-16 01:06:24.699387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 01:06:24.699391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 01:06:24.699395 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-16 01:06:24.699402 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-16 01:06:24.699406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-16 01:06:24.699410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-16 01:06:24.699416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-16 01:06:24.699424 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 01:06:24.699428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 01:06:24.699432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 01:06:24.699438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 01:06:24.699442 | orchestrator | 2026-03-16 01:06:24.699446 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-03-16 01:06:24.699450 | orchestrator | Monday 16 March 
2026 01:03:39 +0000 (0:00:05.366) 0:00:21.695 ********** 2026-03-16 01:06:24.699454 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-16 01:06:24.699458 | orchestrator | 2026-03-16 01:06:24.699462 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-03-16 01:06:24.699466 | orchestrator | Monday 16 March 2026 01:03:40 +0000 (0:00:01.176) 0:00:22.872 ********** 2026-03-16 01:06:24.699470 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1083191, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9262605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.699476 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1083191, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9262605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.699505 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1083191, 'dev': 124, 
'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9262605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.699510 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1083220, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9319077, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.699514 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1083191, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9262605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.699518 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1083191, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9262605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.699542 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1083191, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9262605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-16 01:06:24.699547 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1083191, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9262605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.699554 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1083220, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9319077, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False})  2026-03-16 01:06:24.699561 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1083181, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9250646, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.699565 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1083220, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9319077, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.699569 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1083220, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9319077, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.699573 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1083220, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9319077, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.699580 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1083220, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9319077, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.699584 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1083211, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9297652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.699594 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 
'inode': 1083181, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9250646, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.699603 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1083181, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9250646, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.699609 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1083181, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9250646, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.699620 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1083181, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9250646, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.699630 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1083211, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9297652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.699639 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1083174, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9234262, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.699645 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1083211, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9297652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2026-03-16 01:06:24.699657 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1083211, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9297652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.699668 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1083181, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9250646, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.699674 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1083174, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9234262, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.699680 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1083211, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9297652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-16 01:06:24.699686 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/prometheus.rules)
2026-03-16 01:06:24.699695 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/haproxy.rules)
2026-03-16 01:06:24.699701 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/cadvisor.rules)
2026-03-16 01:06:24.699711 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/cadvisor.rules)
2026-03-16 01:06:24.699721 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/openstack.rules)
2026-03-16 01:06:24.699728 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rules)
2026-03-16 01:06:24.699733 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/haproxy.rules)
2026-03-16 01:06:24.699740 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/cadvisor.rules)
2026-03-16 01:06:24.699750 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/haproxy.rules)
2026-03-16 01:06:24.699760 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/hardware.rules)
2026-03-16 01:06:24.699766 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/haproxy.rules)
2026-03-16 01:06:24.699773 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/cadvisor.rules)
2026-03-16 01:06:24.699783 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules)
2026-03-16 01:06:24.699790 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/haproxy.rules)
2026-03-16 01:06:24.699797 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rules)
2026-03-16 01:06:24.699806 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rules)
2026-03-16 01:06:24.699816 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/haproxy.rules)
2026-03-16 01:06:24.699823 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rules)
2026-03-16 01:06:24.699830 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/elasticsearch.rules)
2026-03-16 01:06:24.699841 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rules)
2026-03-16 01:06:24.699846 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/hardware.rules)
2026-03-16 01:06:24.699852 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/hardware.rules)
2026-03-16 01:06:24.699862 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/hardware.rules)
2026-03-16 01:06:24.699873 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rules)
2026-03-16 01:06:24.699880 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/hardware.rules)
2026-03-16 01:06:24.699887 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/elasticsearch.rules)
2026-03-16 01:06:24.699897 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus.rec.rules)
2026-03-16 01:06:24.699905 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/elasticsearch.rules)
2026-03-16 01:06:24.699912 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/elasticsearch.rules)
2026-03-16 01:06:24.700033 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/elasticsearch.rules)
2026-03-16 01:06:24.700038 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/openstack.rules)
2026-03-16 01:06:24.700043 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/hardware.rules)
2026-03-16 01:06:24.700048 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus.rec.rules)
2026-03-16 01:06:24.700057 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus.rec.rules)
2026-03-16 01:06:24.700061 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-03-16 01:06:24.700066 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus.rec.rules)
2026-03-16 01:06:24.700076 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/elasticsearch.rules)
2026-03-16 01:06:24.700081 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus.rec.rules)
2026-03-16 01:06:24.700085 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rec.rules)
2026-03-16 01:06:24.700090 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-03-16 01:06:24.700097 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-03-16 01:06:24.700103 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-03-16 01:06:24.700107 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-03-16 01:06:24.700117 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/redfish.rules)
2026-03-16 01:06:24.700122 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/redfish.rules)
2026-03-16 01:06:24.700127 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules)
2026-03-16 01:06:24.700131 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-03-16 01:06:24.700139 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/redfish.rules)
2026-03-16 01:06:24.700144 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/redfish.rules)
2026-03-16 01:06:24.700149 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/redfish.rules)
2026-03-16 01:06:24.700158 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus-extra.rules)
2026-03-16 01:06:24.700164 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus-extra.rules)
2026-03-16 01:06:24.700168 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus-extra.rules)
2026-03-16 01:06:24.700173 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rec.rules)
2026-03-16 01:06:24.700333 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus-extra.rules)
2026-03-16 01:06:24.700343 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/redfish.rules)
2026-03-16 01:06:24.700351 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus-extra.rules)
2026-03-16 01:06:24.700359 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus-extra.rules)
2026-03-16 01:06:24.700363 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rules)
2026-03-16 01:06:24.700367 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rec.rules)
2026-03-16 01:06:24.700371 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rec.rules)
2026-03-16 01:06:24.700379 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rec.rules)
2026-03-16 01:06:24.700383 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rec.rules)
2026-03-16 01:06:24.700392 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/haproxy.rules)
2026-03-16 01:06:24.700399 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False,
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1083179, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.923993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.700403 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1083207, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9283624, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.700409 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1083172, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9218907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.700415 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1083172, 'dev': 
124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9218907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.700426 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1083172, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9218907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.700469 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1083172, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9218907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.700481 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1083207, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9283624, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.700490 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1083207, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9283624, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.700498 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1083204, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9278076, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.700504 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1083172, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9218907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 
01:06:24.700510 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1083207, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9283624, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.700522 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1083204, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9278076, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.700532 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1083207, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9283624, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.700538 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1083204, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9278076, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.700545 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1083204, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9278076, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.700557 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1083207, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9283624, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.700563 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1083231, 'dev': 124, 'nlink': 1, 
'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9336858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.700569 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:06:24.700574 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1083204, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9278076, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.700581 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1083209, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9288428, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-16 01:06:24.700588 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1083204, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9278076, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.700592 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1083231, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9336858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.700596 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:06:24.700603 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1083231, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9336858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.700606 | orchestrator | skipping: [testbed-node-5] 2026-03-16 01:06:24.700610 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1083231, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9336858, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.700614 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:06:24.700618 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1083231, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9336858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.700622 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:06:24.700626 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1083231, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9336858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-16 01:06:24.700633 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:06:24.700640 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1083198, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1773620286.9274106, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-16 01:06:24.700644 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1083189, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9257655, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-16 01:06:24.700648 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1083219, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.931581, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-16 01:06:24.700654 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1083169, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9205534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-16 01:06:24.700658 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1083239, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9355311, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-16 01:06:24.700662 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1083218, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9309113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-16 01:06:24.700667 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1083179, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.923993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-16 01:06:24.700680 | orchestrator | changed: [testbed-manager] => 
(item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1083172, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9218907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-16 01:06:24.700685 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1083207, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9283624, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-16 01:06:24.700688 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1083204, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9278076, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-16 01:06:24.700695 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1083231, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9336858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-16 01:06:24.700698 | orchestrator | 2026-03-16 01:06:24.700702 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-03-16 01:06:24.700707 | orchestrator | Monday 16 March 2026 01:04:09 +0000 (0:00:28.694) 0:00:51.566 ********** 2026-03-16 01:06:24.700710 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-16 01:06:24.700714 | orchestrator | 2026-03-16 01:06:24.700718 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-03-16 01:06:24.700722 | orchestrator | Monday 16 March 2026 01:04:09 +0000 (0:00:00.680) 0:00:52.246 ********** 2026-03-16 01:06:24.700726 | orchestrator | [WARNING]: Skipped 2026-03-16 01:06:24.700730 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-16 01:06:24.700734 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-03-16 01:06:24.700738 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-16 01:06:24.700742 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-03-16 01:06:24.700746 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-16 01:06:24.700750 | orchestrator | [WARNING]: Skipped 2026-03-16 01:06:24.700754 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-16 01:06:24.700758 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-03-16 01:06:24.700768 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 
2026-03-16 01:06:24.700775 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-03-16 01:06:24.700781 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-16 01:06:24.700787 | orchestrator | [WARNING]: Skipped 2026-03-16 01:06:24.700793 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-16 01:06:24.700798 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-03-16 01:06:24.700804 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-16 01:06:24.700810 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-03-16 01:06:24.700815 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-16 01:06:24.700821 | orchestrator | [WARNING]: Skipped 2026-03-16 01:06:24.700827 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-16 01:06:24.700832 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-03-16 01:06:24.700838 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-16 01:06:24.700845 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-03-16 01:06:24.700851 | orchestrator | [WARNING]: Skipped 2026-03-16 01:06:24.700856 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-16 01:06:24.700866 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-03-16 01:06:24.700873 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-16 01:06:24.700880 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-03-16 01:06:24.700886 | orchestrator | [WARNING]: Skipped 2026-03-16 01:06:24.700892 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-16 01:06:24.700898 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-03-16 
01:06:24.700904 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-16 01:06:24.700910 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-03-16 01:06:24.700916 | orchestrator | [WARNING]: Skipped 2026-03-16 01:06:24.700922 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-16 01:06:24.700929 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-03-16 01:06:24.700933 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-16 01:06:24.700937 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-03-16 01:06:24.700941 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-16 01:06:24.700944 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-16 01:06:24.700948 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-16 01:06:24.700952 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-16 01:06:24.700956 | orchestrator | 2026-03-16 01:06:24.700960 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-03-16 01:06:24.700964 | orchestrator | Monday 16 March 2026 01:04:11 +0000 (0:00:01.715) 0:00:53.962 ********** 2026-03-16 01:06:24.700968 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-16 01:06:24.700973 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:06:24.700978 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-16 01:06:24.700982 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:06:24.700987 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-16 01:06:24.700991 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:06:24.700996 | orchestrator | skipping: [testbed-node-3] => 
(item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-16 01:06:24.701001 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:06:24.701010 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-16 01:06:24.701014 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:06:24.701019 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-16 01:06:24.701023 | orchestrator | skipping: [testbed-node-5] 2026-03-16 01:06:24.701031 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-03-16 01:06:24.701036 | orchestrator | 2026-03-16 01:06:24.701040 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-03-16 01:06:24.701045 | orchestrator | Monday 16 March 2026 01:04:28 +0000 (0:00:16.425) 0:01:10.388 ********** 2026-03-16 01:06:24.701049 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-16 01:06:24.701053 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:06:24.701058 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-16 01:06:24.701062 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:06:24.701066 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-16 01:06:24.701071 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:06:24.701075 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-16 01:06:24.701080 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:06:24.701084 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-16 01:06:24.701088 | orchestrator | 
skipping: [testbed-node-5] 2026-03-16 01:06:24.701093 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-16 01:06:24.701097 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:06:24.701101 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-03-16 01:06:24.701106 | orchestrator | 2026-03-16 01:06:24.701110 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-03-16 01:06:24.701115 | orchestrator | Monday 16 March 2026 01:04:31 +0000 (0:00:03.436) 0:01:13.824 ********** 2026-03-16 01:06:24.701119 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-16 01:06:24.701124 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-16 01:06:24.701129 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:06:24.701133 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-16 01:06:24.701138 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:06:24.701142 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:06:24.701291 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-16 01:06:24.701300 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:06:24.701304 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-16 01:06:24.701309 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:06:24.701314 | orchestrator | skipping: [testbed-node-5] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-16 01:06:24.701318 | orchestrator | skipping: [testbed-node-5] 2026-03-16 01:06:24.701323 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-03-16 01:06:24.701327 | orchestrator | 2026-03-16 01:06:24.701331 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-03-16 01:06:24.701339 | orchestrator | Monday 16 March 2026 01:04:34 +0000 (0:00:02.894) 0:01:16.719 ********** 2026-03-16 01:06:24.701343 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-16 01:06:24.701347 | orchestrator | 2026-03-16 01:06:24.701351 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-03-16 01:06:24.701355 | orchestrator | Monday 16 March 2026 01:04:35 +0000 (0:00:00.757) 0:01:17.476 ********** 2026-03-16 01:06:24.701359 | orchestrator | skipping: [testbed-manager] 2026-03-16 01:06:24.701362 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:06:24.701366 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:06:24.701370 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:06:24.701374 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:06:24.701377 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:06:24.701381 | orchestrator | skipping: [testbed-node-5] 2026-03-16 01:06:24.701385 | orchestrator | 2026-03-16 01:06:24.701389 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-03-16 01:06:24.701393 | orchestrator | Monday 16 March 2026 01:04:35 +0000 (0:00:00.626) 0:01:18.103 ********** 2026-03-16 01:06:24.701396 | orchestrator | skipping: [testbed-manager] 2026-03-16 01:06:24.701400 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:06:24.701404 | orchestrator | skipping: [testbed-node-4] 
2026-03-16 01:06:24.701408 | orchestrator | skipping: [testbed-node-5] 2026-03-16 01:06:24.701412 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:06:24.701416 | orchestrator | changed: [testbed-node-1] 2026-03-16 01:06:24.701420 | orchestrator | changed: [testbed-node-2] 2026-03-16 01:06:24.701423 | orchestrator | 2026-03-16 01:06:24.701427 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-03-16 01:06:24.701431 | orchestrator | Monday 16 March 2026 01:04:37 +0000 (0:00:02.152) 0:01:20.255 ********** 2026-03-16 01:06:24.701435 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-16 01:06:24.701440 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-16 01:06:24.701443 | orchestrator | skipping: [testbed-manager] 2026-03-16 01:06:24.701451 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-16 01:06:24.701455 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:06:24.701459 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:06:24.701463 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-16 01:06:24.701466 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:06:24.701470 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-16 01:06:24.701474 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:06:24.701478 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-16 01:06:24.701482 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:06:24.701485 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-16 01:06:24.701489 | orchestrator | skipping: [testbed-node-5] 
2026-03-16 01:06:24.701493 | orchestrator | 2026-03-16 01:06:24.701497 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-03-16 01:06:24.701500 | orchestrator | Monday 16 March 2026 01:04:39 +0000 (0:00:01.725) 0:01:21.980 ********** 2026-03-16 01:06:24.701504 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-16 01:06:24.701508 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-16 01:06:24.701513 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:06:24.701516 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-16 01:06:24.701520 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:06:24.701529 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:06:24.701536 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-16 01:06:24.701542 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:06:24.701549 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-16 01:06:24.701556 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:06:24.701562 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-16 01:06:24.701568 | orchestrator | skipping: [testbed-node-5] 2026-03-16 01:06:24.701575 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-03-16 01:06:24.701581 | orchestrator | 2026-03-16 01:06:24.701587 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-03-16 01:06:24.701598 | orchestrator 
| Monday 16 March 2026 01:04:41 +0000 (0:00:01.432) 0:01:23.413 ********** 2026-03-16 01:06:24.701605 | orchestrator | [WARNING]: Skipped 2026-03-16 01:06:24.701612 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-03-16 01:06:24.701618 | orchestrator | due to this access issue: 2026-03-16 01:06:24.701624 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-03-16 01:06:24.701631 | orchestrator | not a directory 2026-03-16 01:06:24.701638 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-16 01:06:24.701644 | orchestrator | 2026-03-16 01:06:24.701650 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-03-16 01:06:24.701657 | orchestrator | Monday 16 March 2026 01:04:42 +0000 (0:00:01.076) 0:01:24.489 ********** 2026-03-16 01:06:24.701663 | orchestrator | skipping: [testbed-manager] 2026-03-16 01:06:24.701669 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:06:24.701676 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:06:24.701683 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:06:24.701687 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:06:24.701691 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:06:24.701695 | orchestrator | skipping: [testbed-node-5] 2026-03-16 01:06:24.701699 | orchestrator | 2026-03-16 01:06:24.701703 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-03-16 01:06:24.701707 | orchestrator | Monday 16 March 2026 01:04:42 +0000 (0:00:00.865) 0:01:25.355 ********** 2026-03-16 01:06:24.701710 | orchestrator | skipping: [testbed-manager] 2026-03-16 01:06:24.701714 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:06:24.701718 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:06:24.701721 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:06:24.701725 | orchestrator | skipping: 
[testbed-node-3] 2026-03-16 01:06:24.701729 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:06:24.701732 | orchestrator | skipping: [testbed-node-5] 2026-03-16 01:06:24.701736 | orchestrator | 2026-03-16 01:06:24.701740 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-03-16 01:06:24.701743 | orchestrator | Monday 16 March 2026 01:04:43 +0000 (0:00:00.890) 0:01:26.246 ********** 2026-03-16 01:06:24.701751 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-16 01:06:24.701760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-16 01:06:24.701765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-16 01:06:24.701770 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-16 01:06:24.701778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-16 01:06:24.701782 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-16 01:06:24.701787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 01:06:24.701791 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-16 01:06:24.701800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 01:06:24.701807 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-16 01:06:24.701811 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-16 01:06:24.701816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 01:06:24.701822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-03-16 01:06:24.701827 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-16 01:06:24.701831 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-16 01:06:24.701835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 01:06:24.701844 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-16 01:06:24.701848 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-16 01:06:24.701852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 01:06:24.701859 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 
'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-16 01:06:24.701864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-16 01:06:24.701868 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-16 01:06:24.701872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-16 01:06:24.701882 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-16 01:06:24.701886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 01:06:24.701890 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 01:06:24.701897 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-16 01:06:24.701901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 01:06:24.701905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-16 01:06:24.701909 | orchestrator | 2026-03-16 01:06:24.701913 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-03-16 01:06:24.701917 | orchestrator | Monday 16 March 2026 01:04:47 +0000 (0:00:03.919) 0:01:30.165 ********** 2026-03-16 01:06:24.701924 | 
orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-16 01:06:24.701928 | orchestrator | skipping: [testbed-manager] 2026-03-16 01:06:24.701932 | orchestrator | 2026-03-16 01:06:24.701936 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-16 01:06:24.701940 | orchestrator | Monday 16 March 2026 01:04:48 +0000 (0:00:01.193) 0:01:31.358 ********** 2026-03-16 01:06:24.701944 | orchestrator | 2026-03-16 01:06:24.701948 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-16 01:06:24.701952 | orchestrator | Monday 16 March 2026 01:04:49 +0000 (0:00:00.071) 0:01:31.430 ********** 2026-03-16 01:06:24.701955 | orchestrator | 2026-03-16 01:06:24.701960 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-16 01:06:24.701964 | orchestrator | Monday 16 March 2026 01:04:49 +0000 (0:00:00.065) 0:01:31.495 ********** 2026-03-16 01:06:24.701967 | orchestrator | 2026-03-16 01:06:24.701971 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-16 01:06:24.701975 | orchestrator | Monday 16 March 2026 01:04:49 +0000 (0:00:00.068) 0:01:31.564 ********** 2026-03-16 01:06:24.701979 | orchestrator | 2026-03-16 01:06:24.701985 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-16 01:06:24.701989 | orchestrator | Monday 16 March 2026 01:04:49 +0000 (0:00:00.228) 0:01:31.793 ********** 2026-03-16 01:06:24.701993 | orchestrator | 2026-03-16 01:06:24.701996 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-16 01:06:24.702000 | orchestrator | Monday 16 March 2026 01:04:49 +0000 (0:00:00.062) 0:01:31.856 ********** 2026-03-16 01:06:24.702004 | orchestrator | 2026-03-16 01:06:24.702008 | orchestrator | TASK [prometheus : Flush handlers] 
********************************************* 2026-03-16 01:06:24.702034 | orchestrator | Monday 16 March 2026 01:04:49 +0000 (0:00:00.063) 0:01:31.919 ********** 2026-03-16 01:06:24.702040 | orchestrator | 2026-03-16 01:06:24.702044 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-03-16 01:06:24.702048 | orchestrator | Monday 16 March 2026 01:04:49 +0000 (0:00:00.085) 0:01:32.004 ********** 2026-03-16 01:06:24.702052 | orchestrator | changed: [testbed-manager] 2026-03-16 01:06:24.702056 | orchestrator | 2026-03-16 01:06:24.702060 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-03-16 01:06:24.702064 | orchestrator | Monday 16 March 2026 01:05:05 +0000 (0:00:15.711) 0:01:47.715 ********** 2026-03-16 01:06:24.702068 | orchestrator | changed: [testbed-node-4] 2026-03-16 01:06:24.702072 | orchestrator | changed: [testbed-node-3] 2026-03-16 01:06:24.702076 | orchestrator | changed: [testbed-node-2] 2026-03-16 01:06:24.702079 | orchestrator | changed: [testbed-node-5] 2026-03-16 01:06:24.702083 | orchestrator | changed: [testbed-manager] 2026-03-16 01:06:24.702087 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:06:24.702091 | orchestrator | changed: [testbed-node-1] 2026-03-16 01:06:24.702095 | orchestrator | 2026-03-16 01:06:24.702099 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-03-16 01:06:24.702103 | orchestrator | Monday 16 March 2026 01:05:19 +0000 (0:00:13.667) 0:02:01.383 ********** 2026-03-16 01:06:24.702107 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:06:24.702111 | orchestrator | changed: [testbed-node-2] 2026-03-16 01:06:24.702115 | orchestrator | changed: [testbed-node-1] 2026-03-16 01:06:24.702120 | orchestrator | 2026-03-16 01:06:24.702125 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-03-16 01:06:24.702131 | 
orchestrator | Monday 16 March 2026 01:05:25 +0000 (0:00:06.044) 0:02:07.427 ********** 2026-03-16 01:06:24.702137 | orchestrator | changed: [testbed-node-2] 2026-03-16 01:06:24.702143 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:06:24.702149 | orchestrator | changed: [testbed-node-1] 2026-03-16 01:06:24.702155 | orchestrator | 2026-03-16 01:06:24.702161 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-03-16 01:06:24.702167 | orchestrator | Monday 16 March 2026 01:05:34 +0000 (0:00:09.945) 0:02:17.373 ********** 2026-03-16 01:06:24.702179 | orchestrator | changed: [testbed-manager] 2026-03-16 01:06:24.702205 | orchestrator | changed: [testbed-node-5] 2026-03-16 01:06:24.702209 | orchestrator | changed: [testbed-node-2] 2026-03-16 01:06:24.702213 | orchestrator | changed: [testbed-node-1] 2026-03-16 01:06:24.702217 | orchestrator | changed: [testbed-node-3] 2026-03-16 01:06:24.702226 | orchestrator | changed: [testbed-node-4] 2026-03-16 01:06:24.702230 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:06:24.702234 | orchestrator | 2026-03-16 01:06:24.702238 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-03-16 01:06:24.702242 | orchestrator | Monday 16 March 2026 01:05:48 +0000 (0:00:13.641) 0:02:31.015 ********** 2026-03-16 01:06:24.702246 | orchestrator | changed: [testbed-manager] 2026-03-16 01:06:24.702250 | orchestrator | 2026-03-16 01:06:24.702254 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-03-16 01:06:24.702258 | orchestrator | Monday 16 March 2026 01:05:56 +0000 (0:00:07.704) 0:02:38.719 ********** 2026-03-16 01:06:24.702262 | orchestrator | changed: [testbed-node-1] 2026-03-16 01:06:24.702266 | orchestrator | changed: [testbed-node-2] 2026-03-16 01:06:24.702269 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:06:24.702273 | orchestrator | 2026-03-16 
01:06:24.702277 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-03-16 01:06:24.702281 | orchestrator | Monday 16 March 2026 01:06:07 +0000 (0:00:11.436) 0:02:50.156 ********** 2026-03-16 01:06:24.702285 | orchestrator | changed: [testbed-manager] 2026-03-16 01:06:24.702289 | orchestrator | 2026-03-16 01:06:24.702293 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-03-16 01:06:24.702296 | orchestrator | Monday 16 March 2026 01:06:12 +0000 (0:00:04.582) 0:02:54.738 ********** 2026-03-16 01:06:24.702300 | orchestrator | changed: [testbed-node-3] 2026-03-16 01:06:24.702304 | orchestrator | changed: [testbed-node-5] 2026-03-16 01:06:24.702308 | orchestrator | changed: [testbed-node-4] 2026-03-16 01:06:24.702312 | orchestrator | 2026-03-16 01:06:24.702316 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-16 01:06:24.702320 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-16 01:06:24.702325 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-16 01:06:24.702329 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-16 01:06:24.702333 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-16 01:06:24.702337 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-16 01:06:24.702340 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-16 01:06:24.702348 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-16 01:06:24.702353 | orchestrator | 2026-03-16 01:06:24.702356 | orchestrator | 
2026-03-16 01:06:24.702360 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-16 01:06:24.702364 | orchestrator | Monday 16 March 2026 01:06:23 +0000 (0:00:10.668) 0:03:05.406 ********** 2026-03-16 01:06:24.702368 | orchestrator | =============================================================================== 2026-03-16 01:06:24.702372 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 28.69s 2026-03-16 01:06:24.702376 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 16.43s 2026-03-16 01:06:24.702385 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 15.71s 2026-03-16 01:06:24.702389 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.67s 2026-03-16 01:06:24.702393 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 13.64s 2026-03-16 01:06:24.702396 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 11.44s 2026-03-16 01:06:24.702400 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.67s 2026-03-16 01:06:24.702404 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 9.95s 2026-03-16 01:06:24.702408 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 7.71s 2026-03-16 01:06:24.702412 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 6.04s 2026-03-16 01:06:24.702416 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.37s 2026-03-16 01:06:24.702420 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.26s 2026-03-16 01:06:24.702424 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 4.58s 2026-03-16 
01:06:24.702427 | orchestrator | prometheus : Check prometheus containers -------------------------------- 3.92s 2026-03-16 01:06:24.702431 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.44s 2026-03-16 01:06:24.702435 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.99s 2026-03-16 01:06:24.702439 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.90s 2026-03-16 01:06:24.702443 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.51s 2026-03-16 01:06:24.702447 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.15s 2026-03-16 01:06:24.702451 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 1.73s 2026-03-16 01:06:27.739761 | orchestrator | 2026-03-16 01:06:27 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:06:27.742267 | orchestrator | 2026-03-16 01:06:27 | INFO  | Task b859e38e-dd67-46e0-af66-e713f10b74a2 is in state STARTED 2026-03-16 01:06:27.744321 | orchestrator | 2026-03-16 01:06:27 | INFO  | Task b5eb2108-9dfa-426c-b183-06439e490c0a is in state STARTED 2026-03-16 01:06:27.745294 | orchestrator | 2026-03-16 01:06:27 | INFO  | Task 8e3c010c-076b-473a-afb5-bc37eea7f35e is in state STARTED 2026-03-16 01:06:27.745333 | orchestrator | 2026-03-16 01:06:27 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:06:30.792570 | orchestrator | 2026-03-16 01:06:30 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:06:30.793326 | orchestrator | 2026-03-16 01:06:30 | INFO  | Task b859e38e-dd67-46e0-af66-e713f10b74a2 is in state STARTED 2026-03-16 01:06:30.795244 | orchestrator | 2026-03-16 01:06:30 | INFO  | Task b5eb2108-9dfa-426c-b183-06439e490c0a is in state STARTED 2026-03-16 01:06:30.796769 | orchestrator | 2026-03-16 01:06:30 | INFO  | 
Task
8e3c010c-076b-473a-afb5-bc37eea7f35e is in state STARTED 2026-03-16 01:07:07.359316 | orchestrator | 2026-03-16 01:07:07 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:07:10.403740 | orchestrator | 2026-03-16 01:07:10 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:07:10.404960 | orchestrator | 2026-03-16 01:07:10 | INFO  | Task b859e38e-dd67-46e0-af66-e713f10b74a2 is in state STARTED 2026-03-16 01:07:10.406390 | orchestrator | 2026-03-16 01:07:10 | INFO  | Task b5eb2108-9dfa-426c-b183-06439e490c0a is in state STARTED 2026-03-16 01:07:10.407771 | orchestrator | 2026-03-16 01:07:10 | INFO  | Task 8e3c010c-076b-473a-afb5-bc37eea7f35e is in state STARTED 2026-03-16 01:07:10.407805 | orchestrator | 2026-03-16 01:07:10 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:07:13.453741 | orchestrator | 2026-03-16 01:07:13 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:07:13.456065 | orchestrator | 2026-03-16 01:07:13 | INFO  | Task b859e38e-dd67-46e0-af66-e713f10b74a2 is in state STARTED 2026-03-16 01:07:13.458588 | orchestrator | 2026-03-16 01:07:13 | INFO  | Task b5eb2108-9dfa-426c-b183-06439e490c0a is in state STARTED 2026-03-16 01:07:13.459802 | orchestrator | 2026-03-16 01:07:13 | INFO  | Task 8e3c010c-076b-473a-afb5-bc37eea7f35e is in state STARTED 2026-03-16 01:07:13.459846 | orchestrator | 2026-03-16 01:07:13 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:07:16.509297 | orchestrator | 2026-03-16 01:07:16 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:07:16.512062 | orchestrator | 2026-03-16 01:07:16.512143 | orchestrator | 2026-03-16 01:07:16 | INFO  | Task b859e38e-dd67-46e0-af66-e713f10b74a2 is in state SUCCESS 2026-03-16 01:07:16.513489 | orchestrator | 2026-03-16 01:07:16.513546 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-16 01:07:16.513558 | 
orchestrator | 2026-03-16 01:07:16.513566 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-16 01:07:16.513575 | orchestrator | Monday 16 March 2026 01:04:29 +0000 (0:00:00.236) 0:00:00.236 ********** 2026-03-16 01:07:16.513584 | orchestrator | ok: [testbed-node-0] 2026-03-16 01:07:16.513593 | orchestrator | ok: [testbed-node-1] 2026-03-16 01:07:16.513601 | orchestrator | ok: [testbed-node-2] 2026-03-16 01:07:16.513609 | orchestrator | 2026-03-16 01:07:16.513617 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-16 01:07:16.513626 | orchestrator | Monday 16 March 2026 01:04:30 +0000 (0:00:00.300) 0:00:00.536 ********** 2026-03-16 01:07:16.513634 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-03-16 01:07:16.513643 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-03-16 01:07:16.513653 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-03-16 01:07:16.513662 | orchestrator | 2026-03-16 01:07:16.513671 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-03-16 01:07:16.513680 | orchestrator | 2026-03-16 01:07:16.513701 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-16 01:07:16.513710 | orchestrator | Monday 16 March 2026 01:04:30 +0000 (0:00:00.416) 0:00:00.952 ********** 2026-03-16 01:07:16.513719 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 01:07:16.513729 | orchestrator | 2026-03-16 01:07:16.513738 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-03-16 01:07:16.513747 | orchestrator | Monday 16 March 2026 01:04:31 +0000 (0:00:00.511) 0:00:01.463 ********** 2026-03-16 01:07:16.513756 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 
2026-03-16 01:07:16.513764 | orchestrator | 2026-03-16 01:07:16.513773 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-03-16 01:07:16.513782 | orchestrator | Monday 16 March 2026 01:04:34 +0000 (0:00:03.483) 0:00:04.947 ********** 2026-03-16 01:07:16.513791 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-03-16 01:07:16.513800 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-03-16 01:07:16.513810 | orchestrator | 2026-03-16 01:07:16.513819 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-03-16 01:07:16.513847 | orchestrator | Monday 16 March 2026 01:04:41 +0000 (0:00:07.021) 0:00:11.968 ********** 2026-03-16 01:07:16.513856 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-16 01:07:16.513865 | orchestrator | 2026-03-16 01:07:16.513874 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-03-16 01:07:16.513884 | orchestrator | Monday 16 March 2026 01:04:45 +0000 (0:00:03.687) 0:00:15.655 ********** 2026-03-16 01:07:16.513892 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-16 01:07:16.513901 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-03-16 01:07:16.513910 | orchestrator | 2026-03-16 01:07:16.513919 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-03-16 01:07:16.513928 | orchestrator | Monday 16 March 2026 01:04:48 +0000 (0:00:03.399) 0:00:19.055 ********** 2026-03-16 01:07:16.513937 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-16 01:07:16.513945 | orchestrator | 2026-03-16 01:07:16.513955 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-03-16 01:07:16.513964 | orchestrator | 
Monday 16 March 2026 01:04:51 +0000 (0:00:03.111) 0:00:22.166 ********** 2026-03-16 01:07:16.513973 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-03-16 01:07:16.513982 | orchestrator | 2026-03-16 01:07:16.513991 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-03-16 01:07:16.514000 | orchestrator | Monday 16 March 2026 01:04:56 +0000 (0:00:04.160) 0:00:26.327 ********** 2026-03-16 01:07:16.514095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-16 01:07:16.514119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-16 01:07:16.514139 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-16 01:07:16.514151 | orchestrator | 2026-03-16 01:07:16.514161 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-16 01:07:16.514171 | orchestrator | Monday 16 March 2026 01:04:59 +0000 (0:00:03.853) 0:00:30.181 ********** 2026-03-16 01:07:16.514182 | 
orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 01:07:16.514192 | orchestrator | 2026-03-16 01:07:16.514241 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-03-16 01:07:16.514252 | orchestrator | Monday 16 March 2026 01:05:00 +0000 (0:00:00.714) 0:00:30.895 ********** 2026-03-16 01:07:16.514263 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:07:16.514272 | orchestrator | changed: [testbed-node-2] 2026-03-16 01:07:16.514281 | orchestrator | changed: [testbed-node-1] 2026-03-16 01:07:16.514293 | orchestrator | 2026-03-16 01:07:16.514308 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-03-16 01:07:16.514322 | orchestrator | Monday 16 March 2026 01:05:05 +0000 (0:00:05.104) 0:00:35.999 ********** 2026-03-16 01:07:16.514337 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-16 01:07:16.514352 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-16 01:07:16.514375 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-16 01:07:16.514390 | orchestrator | 2026-03-16 01:07:16.514405 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-03-16 01:07:16.514426 | orchestrator | Monday 16 March 2026 01:05:08 +0000 (0:00:02.985) 0:00:38.985 ********** 2026-03-16 01:07:16.514441 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-16 01:07:16.514456 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-16 01:07:16.514471 | orchestrator | changed: [testbed-node-2] => 
(item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-16 01:07:16.514486 | orchestrator | 2026-03-16 01:07:16.514495 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-03-16 01:07:16.514504 | orchestrator | Monday 16 March 2026 01:05:10 +0000 (0:00:01.854) 0:00:40.840 ********** 2026-03-16 01:07:16.514512 | orchestrator | ok: [testbed-node-0] 2026-03-16 01:07:16.514521 | orchestrator | ok: [testbed-node-1] 2026-03-16 01:07:16.514530 | orchestrator | ok: [testbed-node-2] 2026-03-16 01:07:16.514538 | orchestrator | 2026-03-16 01:07:16.514547 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-03-16 01:07:16.514556 | orchestrator | Monday 16 March 2026 01:05:11 +0000 (0:00:01.001) 0:00:41.841 ********** 2026-03-16 01:07:16.514565 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:07:16.514573 | orchestrator | 2026-03-16 01:07:16.514582 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-03-16 01:07:16.514590 | orchestrator | Monday 16 March 2026 01:05:11 +0000 (0:00:00.146) 0:00:41.988 ********** 2026-03-16 01:07:16.514599 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:07:16.514608 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:07:16.514617 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:07:16.514625 | orchestrator | 2026-03-16 01:07:16.514634 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-16 01:07:16.514643 | orchestrator | Monday 16 March 2026 01:05:11 +0000 (0:00:00.271) 0:00:42.260 ********** 2026-03-16 01:07:16.514651 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 01:07:16.514660 | orchestrator | 2026-03-16 01:07:16.514669 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA 
certificates] ********* 2026-03-16 01:07:16.514678 | orchestrator | Monday 16 March 2026 01:05:12 +0000 (0:00:00.526) 0:00:42.786 ********** 2026-03-16 01:07:16.514695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-16 01:07:16.514716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-16 01:07:16.514727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-16 01:07:16.514736 | orchestrator | 2026-03-16 01:07:16.514745 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-03-16 01:07:16.514754 | orchestrator | Monday 16 March 2026 01:05:16 +0000 (0:00:04.279) 0:00:47.066 ********** 2026-03-16 01:07:16.514771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-16 01:07:16.514786 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:07:16.514796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-16 01:07:16.514805 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:07:16.514820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-16 01:07:16.514835 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:07:16.514844 | orchestrator | 2026-03-16 01:07:16.514853 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-03-16 01:07:16.514890 | orchestrator | Monday 16 March 2026 01:05:20 +0000 (0:00:03.594) 0:00:50.660 ********** 2026-03-16 01:07:16.514905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-16 01:07:16.514915 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:07:16.514929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-16 01:07:16.514954 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:07:16.514975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-16 01:07:16.514990 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:07:16.515005 | orchestrator | 2026-03-16 01:07:16.515021 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-03-16 01:07:16.515036 | orchestrator | Monday 16 March 2026 01:05:23 +0000 (0:00:03.252) 0:00:53.913 ********** 2026-03-16 01:07:16.515051 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:07:16.515066 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:07:16.515107 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:07:16.515122 | orchestrator | 2026-03-16 01:07:16.515136 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-03-16 01:07:16.515149 | orchestrator | Monday 16 March 2026 01:05:27 +0000 (0:00:03.718) 0:00:57.631 ********** 2026-03-16 01:07:16.515163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-16 01:07:16.515202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-16 01:07:16.515217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-16 01:07:16.515238 | orchestrator | 2026-03-16 01:07:16.515254 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-03-16 01:07:16.515270 | orchestrator | Monday 16 March 2026 01:05:30 +0000 (0:00:03.383) 0:01:01.014 ********** 2026-03-16 01:07:16.515285 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:07:16.515300 | orchestrator | changed: [testbed-node-1] 2026-03-16 01:07:16.515314 | orchestrator | changed: [testbed-node-2] 2026-03-16 01:07:16.515329 | orchestrator | 2026-03-16 01:07:16.515344 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-03-16 01:07:16.515358 | orchestrator | Monday 16 March 2026 01:05:35 +0000 (0:00:05.106) 0:01:06.122 ********** 2026-03-16 01:07:16.515374 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:07:16.515389 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:07:16.515404 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:07:16.515420 | orchestrator | 2026-03-16 01:07:16.515435 | orchestrator | TASK [glance : Copying over 
glance-swift.conf for glance_api] ****************** 2026-03-16 01:07:16.515451 | orchestrator | Monday 16 March 2026 01:05:41 +0000 (0:00:05.939) 0:01:12.061 ********** 2026-03-16 01:07:16.515492 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:07:16.515518 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:07:16.515534 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:07:16.515549 | orchestrator | 2026-03-16 01:07:16.515565 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-03-16 01:07:16.515579 | orchestrator | Monday 16 March 2026 01:05:45 +0000 (0:00:04.037) 0:01:16.098 ********** 2026-03-16 01:07:16.515593 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:07:16.515608 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:07:16.515637 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:07:16.515653 | orchestrator | 2026-03-16 01:07:16.515667 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-03-16 01:07:16.515681 | orchestrator | Monday 16 March 2026 01:05:49 +0000 (0:00:03.636) 0:01:19.735 ********** 2026-03-16 01:07:16.515695 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:07:16.515709 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:07:16.515723 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:07:16.515737 | orchestrator | 2026-03-16 01:07:16.515751 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-03-16 01:07:16.515766 | orchestrator | Monday 16 March 2026 01:05:53 +0000 (0:00:04.036) 0:01:23.771 ********** 2026-03-16 01:07:16.515780 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:07:16.515794 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:07:16.515815 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:07:16.515829 | orchestrator | 2026-03-16 01:07:16.515843 | orchestrator | TASK [glance : Copying over 
glance-haproxy-tls.cfg] **************************** 2026-03-16 01:07:16.515857 | orchestrator | Monday 16 March 2026 01:05:53 +0000 (0:00:00.337) 0:01:24.108 ********** 2026-03-16 01:07:16.515872 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-16 01:07:16.515887 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:07:16.515902 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-16 01:07:16.515917 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:07:16.515933 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-16 01:07:16.515947 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:07:16.515963 | orchestrator | 2026-03-16 01:07:16.515980 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-03-16 01:07:16.516008 | orchestrator | Monday 16 March 2026 01:05:59 +0000 (0:00:06.029) 0:01:30.137 ********** 2026-03-16 01:07:16.516025 | orchestrator | changed: [testbed-node-1] 2026-03-16 01:07:16.516042 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:07:16.516059 | orchestrator | changed: [testbed-node-2] 2026-03-16 01:07:16.516135 | orchestrator | 2026-03-16 01:07:16.516156 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-03-16 01:07:16.516173 | orchestrator | Monday 16 March 2026 01:06:04 +0000 (0:00:04.557) 0:01:34.695 ********** 2026-03-16 01:07:16.516192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 
'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-16 01:07:16.516233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-16 01:07:16.516255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 
6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-16 01:07:16.516283 | orchestrator | 2026-03-16 01:07:16.516298 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-16 01:07:16.516313 | orchestrator | Monday 16 March 2026 01:06:08 +0000 (0:00:04.448) 0:01:39.143 ********** 2026-03-16 01:07:16.516328 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:07:16.516344 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:07:16.516360 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:07:16.516376 | orchestrator | 2026-03-16 01:07:16.516393 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-03-16 01:07:16.516411 | orchestrator | Monday 16 March 2026 01:06:09 +0000 (0:00:00.300) 0:01:39.444 ********** 2026-03-16 01:07:16.516427 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:07:16.516443 | orchestrator | 2026-03-16 01:07:16.516458 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-03-16 01:07:16.516473 | orchestrator | Monday 16 March 2026 01:06:11 +0000 (0:00:02.493) 0:01:41.938 ********** 2026-03-16 01:07:16.516489 | orchestrator | changed: [testbed-node-0] 
2026-03-16 01:07:16.516505 | orchestrator | 2026-03-16 01:07:16.516521 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-03-16 01:07:16.516538 | orchestrator | Monday 16 March 2026 01:06:14 +0000 (0:00:02.592) 0:01:44.531 ********** 2026-03-16 01:07:16.516554 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:07:16.516570 | orchestrator | 2026-03-16 01:07:16.516585 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-03-16 01:07:16.516600 | orchestrator | Monday 16 March 2026 01:06:16 +0000 (0:00:02.493) 0:01:47.024 ********** 2026-03-16 01:07:16.516616 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:07:16.516632 | orchestrator | 2026-03-16 01:07:16.516647 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-03-16 01:07:16.516672 | orchestrator | Monday 16 March 2026 01:06:45 +0000 (0:00:28.860) 0:02:15.884 ********** 2026-03-16 01:07:16.516687 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:07:16.516701 | orchestrator | 2026-03-16 01:07:16.516716 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-16 01:07:16.516730 | orchestrator | Monday 16 March 2026 01:06:47 +0000 (0:00:02.253) 0:02:18.138 ********** 2026-03-16 01:07:16.516743 | orchestrator | 2026-03-16 01:07:16.516759 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-16 01:07:16.516784 | orchestrator | Monday 16 March 2026 01:06:48 +0000 (0:00:00.249) 0:02:18.387 ********** 2026-03-16 01:07:16.516797 | orchestrator | 2026-03-16 01:07:16.516811 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-16 01:07:16.516824 | orchestrator | Monday 16 March 2026 01:06:48 +0000 (0:00:00.080) 0:02:18.468 ********** 2026-03-16 01:07:16.516837 | orchestrator | 2026-03-16 01:07:16.516852 | 
orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-03-16 01:07:16.516867 | orchestrator | Monday 16 March 2026 01:06:48 +0000 (0:00:00.066) 0:02:18.534 ********** 2026-03-16 01:07:16.516882 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:07:16.516899 | orchestrator | changed: [testbed-node-1] 2026-03-16 01:07:16.516919 | orchestrator | changed: [testbed-node-2] 2026-03-16 01:07:16.516935 | orchestrator | 2026-03-16 01:07:16.516950 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-16 01:07:16.516967 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-16 01:07:16.516983 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-16 01:07:16.516998 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-16 01:07:16.517012 | orchestrator | 2026-03-16 01:07:16.517028 | orchestrator | 2026-03-16 01:07:16.517043 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-16 01:07:16.517058 | orchestrator | Monday 16 March 2026 01:07:15 +0000 (0:00:27.532) 0:02:46.067 ********** 2026-03-16 01:07:16.517093 | orchestrator | =============================================================================== 2026-03-16 01:07:16.517110 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 28.86s 2026-03-16 01:07:16.517125 | orchestrator | glance : Restart glance-api container ---------------------------------- 27.53s 2026-03-16 01:07:16.517140 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.02s 2026-03-16 01:07:16.517156 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 6.03s 2026-03-16 01:07:16.517171 | orchestrator | glance : 
Copying over glance-cache.conf for glance_api ------------------ 5.94s 2026-03-16 01:07:16.517186 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.11s 2026-03-16 01:07:16.517202 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 5.10s 2026-03-16 01:07:16.517217 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.56s 2026-03-16 01:07:16.517232 | orchestrator | glance : Check glance containers ---------------------------------------- 4.45s 2026-03-16 01:07:16.517247 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.28s 2026-03-16 01:07:16.517262 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.16s 2026-03-16 01:07:16.517279 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.04s 2026-03-16 01:07:16.517293 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 4.04s 2026-03-16 01:07:16.517308 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.85s 2026-03-16 01:07:16.517323 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.72s 2026-03-16 01:07:16.517338 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.69s 2026-03-16 01:07:16.517353 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.64s 2026-03-16 01:07:16.517368 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.59s 2026-03-16 01:07:16.517384 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.48s 2026-03-16 01:07:16.517400 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.40s 2026-03-16 01:07:16.517629 | orchestrator | 2026-03-16 01:07:16 
| INFO  | Task b5eb2108-9dfa-426c-b183-06439e490c0a is in state STARTED 2026-03-16 01:07:16.517657 | orchestrator | 2026-03-16 01:07:16 | INFO  | Task 8e3c010c-076b-473a-afb5-bc37eea7f35e is in state STARTED 2026-03-16 01:07:16.517673 | orchestrator | 2026-03-16 01:07:16 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:07:19.555409 | orchestrator | 2026-03-16 01:07:19 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:07:19.556767 | orchestrator | 2026-03-16 01:07:19 | INFO  | Task b5eb2108-9dfa-426c-b183-06439e490c0a is in state STARTED 2026-03-16 01:07:19.557790 | orchestrator | 2026-03-16 01:07:19 | INFO  | Task 8e3c010c-076b-473a-afb5-bc37eea7f35e is in state STARTED 2026-03-16 01:07:19.558640 | orchestrator | 2026-03-16 01:07:19 | INFO  | Task 438dc671-2c9a-472b-a0b1-1982c7437c48 is in state STARTED 2026-03-16 01:07:19.558679 | orchestrator | 2026-03-16 01:07:19 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:07:56.133369 | orchestrator | 2026-03-16 01:07:56 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:07:56.137566 | orchestrator | 2026-03-16 01:07:56 | INFO  | Task b5eb2108-9dfa-426c-b183-06439e490c0a is in state SUCCESS 2026-03-16 01:07:56.139276 | orchestrator | 2026-03-16 01:07:56.139324 | orchestrator | 2026-03-16 01:07:56.139332 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-16 01:07:56.139338 | orchestrator | 2026-03-16 01:07:56.139345 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-16 01:07:56.139351 | orchestrator | Monday 16 March 2026 01:05:07 +0000 (0:00:01.127) 0:00:01.127 ********** 2026-03-16 01:07:56.139356 | orchestrator | ok: [testbed-node-0] 2026-03-16 01:07:56.139363 | orchestrator | ok: [testbed-node-1] 2026-03-16 01:07:56.139368 | orchestrator | ok: [testbed-node-2] 2026-03-16 01:07:56.139374 | orchestrator | 2026-03-16 01:07:56.139379 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-16 01:07:56.139385 | orchestrator | Monday 16 March 2026 01:05:08 +0000 (0:00:00.758) 0:00:01.886 ********** 2026-03-16 01:07:56.139390 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-03-16 01:07:56.139396 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-03-16 01:07:56.139402 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-03-16 01:07:56.139407 | orchestrator | 2026-03-16 01:07:56.139413 | orchestrator | PLAY [Apply role cinder] 
******************************************************* 2026-03-16 01:07:56.139418 | orchestrator | 2026-03-16 01:07:56.139424 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-16 01:07:56.139429 | orchestrator | Monday 16 March 2026 01:05:09 +0000 (0:00:00.924) 0:00:02.811 ********** 2026-03-16 01:07:56.139434 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 01:07:56.139440 | orchestrator | 2026-03-16 01:07:56.139447 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-03-16 01:07:56.139456 | orchestrator | Monday 16 March 2026 01:05:10 +0000 (0:00:01.393) 0:00:04.204 ********** 2026-03-16 01:07:56.139462 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-03-16 01:07:56.139468 | orchestrator | 2026-03-16 01:07:56.139473 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-03-16 01:07:56.139479 | orchestrator | Monday 16 March 2026 01:05:14 +0000 (0:00:03.863) 0:00:08.067 ********** 2026-03-16 01:07:56.139484 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-03-16 01:07:56.139490 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-03-16 01:07:56.139495 | orchestrator | 2026-03-16 01:07:56.139500 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-03-16 01:07:56.139506 | orchestrator | Monday 16 March 2026 01:05:21 +0000 (0:00:06.827) 0:00:14.895 ********** 2026-03-16 01:07:56.139511 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-16 01:07:56.139517 | orchestrator | 2026-03-16 01:07:56.139544 | orchestrator | TASK [service-ks-register : cinder | Creating users] 
*************************** 2026-03-16 01:07:56.139555 | orchestrator | Monday 16 March 2026 01:05:25 +0000 (0:00:04.213) 0:00:19.109 ********** 2026-03-16 01:07:56.139564 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-16 01:07:56.139573 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-03-16 01:07:56.139599 | orchestrator | 2026-03-16 01:07:56.139609 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-03-16 01:07:56.139619 | orchestrator | Monday 16 March 2026 01:05:29 +0000 (0:00:03.610) 0:00:22.719 ********** 2026-03-16 01:07:56.139639 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-16 01:07:56.139649 | orchestrator | 2026-03-16 01:07:56.139659 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-03-16 01:07:56.139669 | orchestrator | Monday 16 March 2026 01:05:32 +0000 (0:00:03.518) 0:00:26.237 ********** 2026-03-16 01:07:56.139678 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-03-16 01:07:56.139688 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-03-16 01:07:56.139698 | orchestrator | 2026-03-16 01:07:56.139708 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-03-16 01:07:56.139718 | orchestrator | Monday 16 March 2026 01:05:40 +0000 (0:00:07.825) 0:00:34.062 ********** 2026-03-16 01:07:56.139731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-16 01:07:56.139756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-16 01:07:56.139768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-16 01:07:56.139785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-16 01:07:56.139799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-16 01:07:56.139810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-16 01:07:56.139820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-16 01:07:56.139836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-16 01:07:56.139846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-16 01:07:56.140387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-16 01:07:56.140419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-16 01:07:56.140426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-16 01:07:56.140432 | orchestrator | 2026-03-16 01:07:56.140438 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-16 01:07:56.140444 | orchestrator | Monday 16 March 2026 01:05:42 +0000 (0:00:02.308) 0:00:36.371 ********** 2026-03-16 01:07:56.140449 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:07:56.140455 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:07:56.140460 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:07:56.140466 | orchestrator | 2026-03-16 01:07:56.140471 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-16 01:07:56.140476 | orchestrator | Monday 16 March 2026 01:05:43 +0000 (0:00:00.323) 0:00:36.695 ********** 2026-03-16 01:07:56.140482 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 01:07:56.140487 | orchestrator | 2026-03-16 01:07:56.140516 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-03-16 01:07:56.140523 | orchestrator | Monday 16 March 2026 01:05:44 +0000 (0:00:00.811) 0:00:37.507 ********** 
2026-03-16 01:07:56.140528 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-03-16 01:07:56.140534 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-03-16 01:07:56.140539 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-03-16 01:07:56.140544 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-03-16 01:07:56.140549 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-03-16 01:07:56.140555 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-03-16 01:07:56.140567 | orchestrator | 2026-03-16 01:07:56.140573 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-03-16 01:07:56.140578 | orchestrator | Monday 16 March 2026 01:05:46 +0000 (0:00:01.981) 0:00:39.489 ********** 2026-03-16 01:07:56.140584 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-16 01:07:56.140591 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 
'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-16 01:07:56.140600 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-16 01:07:56.140606 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-16 01:07:56.140627 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-16 01:07:56.140637 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-16 01:07:56.140643 | orchestrator | changed: [testbed-node-1] => 
(item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-16 01:07:56.140653 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-16 01:07:56.140659 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-16 01:07:56.140680 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-16 01:07:56.140690 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 
'enabled': True}]) 2026-03-16 01:07:56.140696 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-16 01:07:56.140701 | orchestrator | 2026-03-16 01:07:56.140707 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-03-16 01:07:56.140712 | orchestrator | Monday 16 March 2026 01:05:49 +0000 (0:00:03.597) 0:00:43.087 ********** 2026-03-16 01:07:56.140718 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-16 01:07:56.140727 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-16 01:07:56.140732 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-16 01:07:56.140738 | orchestrator | 2026-03-16 01:07:56.140743 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-03-16 01:07:56.140749 | orchestrator | Monday 16 March 2026 01:05:52 +0000 (0:00:02.624) 0:00:45.711 ********** 2026-03-16 01:07:56.140754 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-03-16 01:07:56.140759 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 
2026-03-16 01:07:56.140765 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-03-16 01:07:56.140770 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-03-16 01:07:56.140775 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-03-16 01:07:56.140781 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-03-16 01:07:56.140786 | orchestrator | 2026-03-16 01:07:56.140792 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-03-16 01:07:56.140801 | orchestrator | Monday 16 March 2026 01:05:56 +0000 (0:00:04.346) 0:00:50.058 ********** 2026-03-16 01:07:56.140810 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-03-16 01:07:56.140820 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-03-16 01:07:56.140830 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-03-16 01:07:56.140840 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-03-16 01:07:56.140897 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-03-16 01:07:56.140911 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-03-16 01:07:56.140916 | orchestrator | 2026-03-16 01:07:56.140922 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-03-16 01:07:56.140927 | orchestrator | Monday 16 March 2026 01:05:58 +0000 (0:00:01.418) 0:00:51.476 ********** 2026-03-16 01:07:56.140932 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:07:56.140938 | orchestrator | 2026-03-16 01:07:56.140943 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-03-16 01:07:56.140948 | orchestrator | Monday 16 March 2026 01:05:58 +0000 (0:00:00.250) 0:00:51.727 ********** 2026-03-16 01:07:56.140954 | orchestrator | skipping: [testbed-node-0] 2026-03-16 
01:07:56.140959 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:07:56.140999 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:07:56.141009 | orchestrator | 2026-03-16 01:07:56.141016 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-16 01:07:56.141025 | orchestrator | Monday 16 March 2026 01:05:58 +0000 (0:00:00.573) 0:00:52.300 ********** 2026-03-16 01:07:56.141034 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 01:07:56.141044 | orchestrator | 2026-03-16 01:07:56.141108 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-03-16 01:07:56.141122 | orchestrator | Monday 16 March 2026 01:06:00 +0000 (0:00:01.215) 0:00:53.515 ********** 2026-03-16 01:07:56.141133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-16 01:07:56.141144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-16 01:07:56.141154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-16 01:07:56.141169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-16 01:07:56.141196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-16 01:07:56.141203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-16 01:07:56.141209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-16 01:07:56.141215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-16 01:07:56.141223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-16 
01:07:56.141233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-16 01:07:56.141243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-16 01:07:56.141249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-16 01:07:56.141254 | orchestrator | 2026-03-16 01:07:56.141260 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-03-16 01:07:56.141265 | orchestrator | Monday 16 March 2026 01:06:04 +0000 (0:00:04.317) 0:00:57.833 ********** 2026-03-16 01:07:56.141271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-16 01:07:56.141279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-16 01:07:56.141289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-16 01:07:56.141299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-16 01:07:56.141305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-16 01:07:56.141311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-16 01:07:56.141319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-16 01:07:56.141328 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-16 01:07:56.141334 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:07:56.141339 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:07:56.141345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-16 01:07:56.141354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-16 01:07:56.141360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-16 01:07:56.141366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-16 01:07:56.141372 | orchestrator | skipping: 
[testbed-node-2] 2026-03-16 01:07:56.141377 | orchestrator | 2026-03-16 01:07:56.141383 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-03-16 01:07:56.141391 | orchestrator | Monday 16 March 2026 01:06:05 +0000 (0:00:00.882) 0:00:58.715 ********** 2026-03-16 01:07:56.141402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-16 01:07:56.141412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-16 01:07:56.141423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-16 01:07:56.141429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-16 01:07:56.141435 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:07:56.141440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-16 01:07:56.141452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-16 01:07:56.141458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-16 01:07:56.141463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-16 01:07:56.141472 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:07:56.141478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-16 01:07:56.141483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-16 01:07:56.141489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-16 01:07:56.141500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-16 01:07:56.141506 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:07:56.141512 | orchestrator |
2026-03-16 01:07:56.141517 | orchestrator | TASK [cinder : Copying over config.json files for services] ********************
2026-03-16 01:07:56.141523 | orchestrator | Monday 16 March 2026 01:06:06 +0000 (0:00:01.552) 0:01:00.268 **********
2026-03-16 01:07:56.141528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-16 01:07:56.141537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-16 01:07:56.141544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-16 01:07:56.141555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-16 01:07:56.141560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-16 01:07:56.141566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-16 01:07:56.141597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-16 01:07:56.141604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-16 01:07:56.141609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-16 01:07:56.141623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-16 01:07:56.141629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-16 01:07:56.141635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-16 01:07:56.141641 | orchestrator |
2026-03-16 01:07:56.141646 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] **********************************
2026-03-16 01:07:56.141651 | orchestrator | Monday 16 March 2026 01:06:11 +0000 (0:00:04.872) 0:01:05.140 **********
2026-03-16 01:07:56.141657 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-03-16 01:07:56.141666 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-03-16 01:07:56.141672 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-03-16 01:07:56.141677 | orchestrator |
2026-03-16 01:07:56.141682 | orchestrator | TASK [cinder : Copying over cinder.conf] ***************************************
2026-03-16 01:07:56.141688 | orchestrator | Monday 16 March 2026 01:06:13 +0000 (0:00:01.534) 0:01:06.674 **********
2026-03-16 01:07:56.141693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-16 01:07:56.141703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-16 01:07:56.141711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-16 01:07:56.141717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-16 01:07:56.141727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-16 01:07:56.141733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-16 01:07:56.141742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-16 01:07:56.141748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-16 01:07:56.141757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-16 01:07:56.141764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-16 01:07:56.141775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-16 01:07:56.141782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-16 01:07:56.141791 | orchestrator |
2026-03-16 01:07:56.141797 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ********************
2026-03-16 01:07:56.141804 | orchestrator | Monday 16 March 2026 01:06:24 +0000 (0:00:11.290) 0:01:17.965 **********
2026-03-16 01:07:56.141810 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:07:56.141817 | orchestrator | changed: [testbed-node-1]
2026-03-16 01:07:56.141823 | orchestrator | changed: [testbed-node-2]
2026-03-16 01:07:56.141829 | orchestrator |
2026-03-16 01:07:56.141836 | orchestrator | TASK [cinder : Copying over existing policy file] ******************************
2026-03-16 01:07:56.141842 | orchestrator | Monday 16 March 2026 01:06:26 +0000 (0:00:01.802) 0:01:19.768 **********
2026-03-16 01:07:56.141848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-16 01:07:56.141858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-16 01:07:56.141865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-16 01:07:56.141874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-16 01:07:56.141884 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:07:56.141891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-16 01:07:56.141897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-16 01:07:56.141906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-16 01:07:56.141912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-16 01:07:56.141919 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:07:56.141928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-16 01:07:56.141938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-16 01:07:56.141945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-16 01:07:56.141951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-16 01:07:56.141958 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:07:56.141964 | orchestrator |
2026-03-16 01:07:56.141970 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] ****************
2026-03-16 01:07:56.141979 | orchestrator | Monday 16 March 2026 01:06:27 +0000 (0:00:00.836) 0:01:20.604 **********
2026-03-16 01:07:56.142082 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:07:56.142090 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:07:56.142096 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:07:56.142102 | orchestrator |
2026-03-16 01:07:56.142109 | orchestrator | TASK [cinder : Check cinder containers] ****************************************
2026-03-16 01:07:56.142116 | orchestrator | Monday 16 March 2026 01:06:27 +0000 (0:00:00.344) 0:01:20.948 **********
2026-03-16 01:07:56.142123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-16 01:07:56.142134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-16 01:07:56.142146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-16 01:07:56.142153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-16 01:07:56.142162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-16 01:07:56.142168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-16 01:07:56.142173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-16 01:07:56.142186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-16 01:07:56.142193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-16 01:07:56.142198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-16 01:07:56.142208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-16 01:07:56.142214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}}) 2026-03-16 01:07:56.142220 | orchestrator | 2026-03-16 01:07:56.142229 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-16 01:07:56.142234 | orchestrator | Monday 16 March 2026 01:06:30 +0000 (0:00:03.435) 0:01:24.383 ********** 2026-03-16 01:07:56.142240 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:07:56.142245 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:07:56.142251 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:07:56.142256 | orchestrator | 2026-03-16 01:07:56.142262 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-03-16 01:07:56.142267 | orchestrator | Monday 16 March 2026 01:06:31 +0000 (0:00:00.517) 0:01:24.901 ********** 2026-03-16 01:07:56.142273 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:07:56.142278 | orchestrator | 2026-03-16 01:07:56.142283 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-03-16 01:07:56.142289 | orchestrator | Monday 16 March 2026 01:06:33 +0000 (0:00:02.467) 0:01:27.368 ********** 2026-03-16 01:07:56.142294 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:07:56.142300 | orchestrator | 2026-03-16 01:07:56.142305 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-03-16 01:07:56.142314 | orchestrator | Monday 16 March 2026 01:06:36 +0000 (0:00:02.586) 0:01:29.955 ********** 2026-03-16 01:07:56.142320 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:07:56.142325 | orchestrator | 2026-03-16 01:07:56.142330 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-16 01:07:56.142336 | orchestrator | Monday 16 March 2026 01:06:56 +0000 (0:00:19.629) 0:01:49.585 ********** 2026-03-16 01:07:56.142341 | orchestrator | 2026-03-16 01:07:56.142347 | orchestrator | TASK [cinder : Flush handlers] 
************************************************* 2026-03-16 01:07:56.142352 | orchestrator | Monday 16 March 2026 01:06:56 +0000 (0:00:00.078) 0:01:49.663 ********** 2026-03-16 01:07:56.142357 | orchestrator | 2026-03-16 01:07:56.142363 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-16 01:07:56.142368 | orchestrator | Monday 16 March 2026 01:06:56 +0000 (0:00:00.068) 0:01:49.731 ********** 2026-03-16 01:07:56.142374 | orchestrator | 2026-03-16 01:07:56.142379 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-03-16 01:07:56.142385 | orchestrator | Monday 16 March 2026 01:06:56 +0000 (0:00:00.069) 0:01:49.801 ********** 2026-03-16 01:07:56.142390 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:07:56.142396 | orchestrator | changed: [testbed-node-1] 2026-03-16 01:07:56.142401 | orchestrator | changed: [testbed-node-2] 2026-03-16 01:07:56.142406 | orchestrator | 2026-03-16 01:07:56.142411 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-03-16 01:07:56.142416 | orchestrator | Monday 16 March 2026 01:07:16 +0000 (0:00:20.410) 0:02:10.211 ********** 2026-03-16 01:07:56.142421 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:07:56.142426 | orchestrator | changed: [testbed-node-2] 2026-03-16 01:07:56.142431 | orchestrator | changed: [testbed-node-1] 2026-03-16 01:07:56.142436 | orchestrator | 2026-03-16 01:07:56.142442 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-03-16 01:07:56.142447 | orchestrator | Monday 16 March 2026 01:07:27 +0000 (0:00:10.697) 0:02:20.908 ********** 2026-03-16 01:07:56.142452 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:07:56.142457 | orchestrator | changed: [testbed-node-1] 2026-03-16 01:07:56.142462 | orchestrator | changed: [testbed-node-2] 2026-03-16 01:07:56.142467 | orchestrator | 2026-03-16 
01:07:56.142473 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-03-16 01:07:56.142478 | orchestrator | Monday 16 March 2026 01:07:49 +0000 (0:00:21.872) 0:02:42.780 ********** 2026-03-16 01:07:56.142483 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:07:56.142488 | orchestrator | changed: [testbed-node-1] 2026-03-16 01:07:56.142493 | orchestrator | changed: [testbed-node-2] 2026-03-16 01:07:56.142498 | orchestrator | 2026-03-16 01:07:56.142503 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-03-16 01:07:56.142553 | orchestrator | Monday 16 March 2026 01:07:55 +0000 (0:00:05.968) 0:02:48.749 ********** 2026-03-16 01:07:56.142564 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:07:56.142569 | orchestrator | 2026-03-16 01:07:56.142574 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-16 01:07:56.142580 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-16 01:07:56.142586 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-16 01:07:56.142594 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-16 01:07:56.142599 | orchestrator | 2026-03-16 01:07:56.142604 | orchestrator | 2026-03-16 01:07:56.142609 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-16 01:07:56.142614 | orchestrator | Monday 16 March 2026 01:07:55 +0000 (0:00:00.251) 0:02:49.000 ********** 2026-03-16 01:07:56.142620 | orchestrator | =============================================================================== 2026-03-16 01:07:56.142625 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 21.87s 2026-03-16 01:07:56.142630 | orchestrator | cinder 
: Restart cinder-api container ---------------------------------- 20.41s 2026-03-16 01:07:56.142635 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 19.63s 2026-03-16 01:07:56.142640 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 11.29s 2026-03-16 01:07:56.142645 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.70s 2026-03-16 01:07:56.142650 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.83s 2026-03-16 01:07:56.142655 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.83s 2026-03-16 01:07:56.142660 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 5.97s 2026-03-16 01:07:56.142665 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.87s 2026-03-16 01:07:56.142670 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 4.35s 2026-03-16 01:07:56.142675 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.32s 2026-03-16 01:07:56.142680 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 4.21s 2026-03-16 01:07:56.142685 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.86s 2026-03-16 01:07:56.142690 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.61s 2026-03-16 01:07:56.142695 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.60s 2026-03-16 01:07:56.142701 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.52s 2026-03-16 01:07:56.142706 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.44s 2026-03-16 01:07:56.142714 | orchestrator | cinder : Copy over 
Ceph keyring files for cinder-volume ----------------- 2.62s 2026-03-16 01:07:56.142719 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.59s 2026-03-16 01:07:56.142724 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.47s 2026-03-16 01:07:56.142729 | orchestrator | 2026-03-16 01:07:56 | INFO  | Task 8e3c010c-076b-473a-afb5-bc37eea7f35e is in state STARTED 2026-03-16 01:07:56.142735 | orchestrator | 2026-03-16 01:07:56 | INFO  | Task 438dc671-2c9a-472b-a0b1-1982c7437c48 is in state STARTED 2026-03-16 01:07:56.142740 | orchestrator | 2026-03-16 01:07:56 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:09:36.802973 | orchestrator | 2026-03-16 01:09:36 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED 2026-03-16 01:09:36.804927 | orchestrator | 2026-03-16 01:09:36 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:09:36.806237 | orchestrator | 2026-03-16 01:09:36 | INFO  | Task 8e3c010c-076b-473a-afb5-bc37eea7f35e is in state SUCCESS 2026-03-16 01:09:36.808293 | orchestrator | 2026-03-16 01:09:36 | INFO  | Task 438dc671-2c9a-472b-a0b1-1982c7437c48 is in state STARTED 2026-03-16 01:09:36.808349 | orchestrator | 2026-03-16 01:09:36 | INFO  |
Wait 1 second(s) until the next check 2026-03-16 01:09:39.851824 | orchestrator | 2026-03-16 01:09:39 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED 2026-03-16 01:09:39.853544 | orchestrator | 2026-03-16 01:09:39 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:09:39.855459 | orchestrator | 2026-03-16 01:09:39 | INFO  | Task 438dc671-2c9a-472b-a0b1-1982c7437c48 is in state STARTED 2026-03-16 01:09:39.855949 | orchestrator | 2026-03-16 01:09:39 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:09:42.899172 | orchestrator | 2026-03-16 01:09:42 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED 2026-03-16 01:09:42.900689 | orchestrator | 2026-03-16 01:09:42 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:09:42.904068 | orchestrator | 2026-03-16 01:09:42 | INFO  | Task 438dc671-2c9a-472b-a0b1-1982c7437c48 is in state SUCCESS 2026-03-16 01:09:42.904160 | orchestrator | 2026-03-16 01:09:42.904168 | orchestrator | 2026-03-16 01:09:42.904173 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-16 01:09:42.904178 | orchestrator | 2026-03-16 01:09:42.904182 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-16 01:09:42.904186 | orchestrator | Monday 16 March 2026 01:06:28 +0000 (0:00:00.193) 0:00:00.193 ********** 2026-03-16 01:09:42.904190 | orchestrator | ok: [testbed-node-0] 2026-03-16 01:09:42.904195 | orchestrator | ok: [testbed-node-1] 2026-03-16 01:09:42.904199 | orchestrator | ok: [testbed-node-2] 2026-03-16 01:09:42.904202 | orchestrator | 2026-03-16 01:09:42.904206 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-16 01:09:42.904231 | orchestrator | Monday 16 March 2026 01:06:28 +0000 (0:00:00.298) 0:00:00.491 ********** 2026-03-16 01:09:42.904236 | orchestrator | ok: 
[testbed-node-0] => (item=enable_nova_True)
2026-03-16 01:09:42.904241 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2026-03-16 01:09:42.904244 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2026-03-16 01:09:42.904248 | orchestrator |
2026-03-16 01:09:42.904252 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2026-03-16 01:09:42.904256 | orchestrator |
2026-03-16 01:09:42.904260 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2026-03-16 01:09:42.904263 | orchestrator | Monday 16 March 2026 01:06:29 +0000 (0:00:00.654) 0:00:01.146 **********
2026-03-16 01:09:42.904267 | orchestrator |
2026-03-16 01:09:42.904271 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
2026-03-16 01:09:42.904275 | orchestrator |
2026-03-16 01:09:42.904278 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
2026-03-16 01:09:42.904282 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:09:42.904286 | orchestrator | ok: [testbed-node-2]
2026-03-16 01:09:42.904290 | orchestrator | ok: [testbed-node-1]
2026-03-16 01:09:42.904294 | orchestrator |
2026-03-16 01:09:42.904297 | orchestrator | PLAY RECAP *********************************************************************
2026-03-16 01:09:42.904302 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-16 01:09:42.904308 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-16 01:09:42.904312 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-16 01:09:42.904315 | orchestrator |
2026-03-16 01:09:42.904319 | orchestrator |
2026-03-16 01:09:42.904323 | orchestrator | TASKS RECAP ********************************************************************
2026-03-16 01:09:42.904327 | orchestrator | Monday 16 March 2026 01:09:33 +0000 (0:03:04.764) 0:03:05.910 **********
2026-03-16 01:09:42.904331 | orchestrator | ===============================================================================
2026-03-16 01:09:42.904334 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 184.76s
2026-03-16 01:09:42.904338 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.65s
2026-03-16 01:09:42.904342 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s
2026-03-16 01:09:42.904346 | orchestrator |
2026-03-16 01:09:42.906233 | orchestrator |
2026-03-16 01:09:42.906364 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-16 01:09:42.906375 | orchestrator |
2026-03-16 01:09:42.906379 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-16 01:09:42.906384 | orchestrator | Monday 16 March 2026 01:07:21 +0000 (0:00:00.271) 0:00:00.271 **********
2026-03-16 01:09:42.906388 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:09:42.906393 | orchestrator | ok: [testbed-node-1]
2026-03-16 01:09:42.906397 | orchestrator | ok: [testbed-node-2]
2026-03-16 01:09:42.906401 | orchestrator |
2026-03-16 01:09:42.906405 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-16 01:09:42.906409 | orchestrator | Monday 16 March 2026 01:07:21 +0000 (0:00:00.323) 0:00:00.595 **********
2026-03-16 01:09:42.906413 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-03-16 01:09:42.906417 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2026-03-16 01:09:42.906421 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2026-03-16 01:09:42.906424 | orchestrator |
2026-03-16 01:09:42.906428 | orchestrator | PLAY [Apply role grafana]
****************************************************** 2026-03-16 01:09:42.906432 | orchestrator | 2026-03-16 01:09:42.906436 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-16 01:09:42.906440 | orchestrator | Monday 16 March 2026 01:07:22 +0000 (0:00:00.420) 0:00:01.015 ********** 2026-03-16 01:09:42.906458 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 01:09:42.906462 | orchestrator | 2026-03-16 01:09:42.906468 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-03-16 01:09:42.906474 | orchestrator | Monday 16 March 2026 01:07:22 +0000 (0:00:00.551) 0:00:01.567 ********** 2026-03-16 01:09:42.906486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-16 01:09:42.906507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-16 01:09:42.906516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-16 01:09:42.906523 | orchestrator | 2026-03-16 01:09:42.906529 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-03-16 01:09:42.906536 | orchestrator | Monday 16 March 2026 01:07:23 +0000 (0:00:00.744) 0:00:02.312 ********** 2026-03-16 01:09:42.906542 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-03-16 01:09:42.906549 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-03-16 01:09:42.906555 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-16 01:09:42.906562 | orchestrator | 2026-03-16 01:09:42.906577 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-16 01:09:42.906583 | orchestrator | Monday 16 March 2026 01:07:24 +0000 (0:00:00.910) 0:00:03.223 ********** 2026-03-16 01:09:42.906589 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 01:09:42.906596 | orchestrator | 
2026-03-16 01:09:42.906602 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-03-16 01:09:42.906609 | orchestrator | Monday 16 March 2026 01:07:25 +0000 (0:00:00.792) 0:00:04.016 ********** 2026-03-16 01:09:42.906635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-16 01:09:42.906660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-16 01:09:42.906668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-16 01:09:42.906675 | orchestrator | 2026-03-16 01:09:42.906681 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-03-16 01:09:42.906688 | orchestrator | Monday 16 March 2026 01:07:26 +0000 (0:00:01.275) 0:00:05.291 ********** 2026-03-16 01:09:42.906694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-16 01:09:42.906700 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:09:42.906707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-16 01:09:42.906714 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:09:42.906731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-16 01:09:42.906745 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:09:42.906797 | orchestrator | 2026-03-16 01:09:42.906804 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-03-16 01:09:42.906810 | orchestrator | Monday 16 March 2026 01:07:26 +0000 (0:00:00.363) 0:00:05.655 ********** 2026-03-16 01:09:42.906816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-16 01:09:42.906823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-16 01:09:42.906830 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:09:42.906837 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:09:42.906844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-16 01:09:42.906850 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:09:42.906856 | orchestrator | 2026-03-16 01:09:42.906862 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-03-16 01:09:42.906869 | orchestrator | Monday 16 March 2026 01:07:27 +0000 
(0:00:00.933) 0:00:06.589 ********** 2026-03-16 01:09:42.906876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-16 01:09:42.906888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-16 01:09:42.906906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-16 01:09:42.906913 | orchestrator | 2026-03-16 01:09:42.906920 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-03-16 01:09:42.906926 | orchestrator | Monday 16 March 2026 01:07:29 +0000 (0:00:01.576) 0:00:08.165 ********** 2026-03-16 01:09:42.906933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-16 01:09:42.907023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-16 01:09:42.907030 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-16 01:09:42.907038 | orchestrator | 2026-03-16 01:09:42.907047 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-03-16 01:09:42.907053 | orchestrator | Monday 16 March 2026 01:07:30 +0000 (0:00:01.579) 0:00:09.745 ********** 2026-03-16 01:09:42.907059 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:09:42.907065 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:09:42.907070 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:09:42.907076 | orchestrator | 2026-03-16 01:09:42.907092 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-03-16 01:09:42.907098 | orchestrator | Monday 16 March 2026 01:07:31 +0000 (0:00:00.510) 0:00:10.255 ********** 2026-03-16 01:09:42.907103 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-16 01:09:42.907110 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-16 01:09:42.907115 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-16 01:09:42.907121 | orchestrator | 2026-03-16 01:09:42.907128 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 
2026-03-16 01:09:42.907134 | orchestrator | Monday 16 March 2026 01:07:32 +0000 (0:00:01.176) 0:00:11.431 ********** 2026-03-16 01:09:42.907141 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-16 01:09:42.907154 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-16 01:09:42.907165 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-16 01:09:42.907173 | orchestrator | 2026-03-16 01:09:42.907177 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-03-16 01:09:42.907180 | orchestrator | Monday 16 March 2026 01:07:33 +0000 (0:00:01.179) 0:00:12.611 ********** 2026-03-16 01:09:42.907184 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-16 01:09:42.907188 | orchestrator | 2026-03-16 01:09:42.907191 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-03-16 01:09:42.907195 | orchestrator | Monday 16 March 2026 01:07:34 +0000 (0:00:00.751) 0:00:13.362 ********** 2026-03-16 01:09:42.907199 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-03-16 01:09:42.907203 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-03-16 01:09:42.907206 | orchestrator | ok: [testbed-node-0] 2026-03-16 01:09:42.907210 | orchestrator | ok: [testbed-node-1] 2026-03-16 01:09:42.907214 | orchestrator | ok: [testbed-node-2] 2026-03-16 01:09:42.907218 | orchestrator | 2026-03-16 01:09:42.907221 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-03-16 01:09:42.907225 | orchestrator | Monday 16 March 2026 01:07:35 +0000 (0:00:00.730) 0:00:14.093 ********** 2026-03-16 01:09:42.907229 | orchestrator | skipping: 
[testbed-node-0] 2026-03-16 01:09:42.907233 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:09:42.907236 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:09:42.907240 | orchestrator | 2026-03-16 01:09:42.907244 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-03-16 01:09:42.907247 | orchestrator | Monday 16 March 2026 01:07:35 +0000 (0:00:00.526) 0:00:14.620 ********** 2026-03-16 01:09:42.907252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1083051, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.8616846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.907256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1083051, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.8616846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.907265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1083051, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.8616846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.907269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1083084, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.8791568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.907282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1083084, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.8791568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.907287 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1083084, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.8791568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.907291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1083058, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.8646848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.907295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1083058, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.8646848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.907302 | orchestrator | changed: [testbed-node-2] => 
(item=ceph/ceph_pools.json, size=25279, mode=0644, owner=root:root)
2026-03-16 01:09:42.907306 | orchestrator | changed: [testbed-node-1] => (item=ceph/rgw-s3-analytics.json, size=167897, mode=0644, owner=root:root)
2026-03-16 01:09:42.907320 | orchestrator | changed: [testbed-node-0] => (item=ceph/rgw-s3-analytics.json, size=167897, mode=0644, owner=root:root)
2026-03-16 01:09:42.907324 | orchestrator | changed: [testbed-node-2] => (item=ceph/rgw-s3-analytics.json, size=167897, mode=0644, owner=root:root)
2026-03-16 01:09:42.907328 | orchestrator | changed: [testbed-node-1] => (item=ceph/osd-device-details.json, size=26655, mode=0644, owner=root:root)
2026-03-16 01:09:42.907332 | orchestrator | changed: [testbed-node-0] => (item=ceph/osd-device-details.json, size=26655, mode=0644, owner=root:root)
2026-03-16 01:09:42.907339 | orchestrator | changed: [testbed-node-2] => (item=ceph/osd-device-details.json, size=26655, mode=0644, owner=root:root)
2026-03-16 01:09:42.907343 | orchestrator | changed: [testbed-node-1] => (item=ceph/radosgw-overview.json, size=39556, mode=0644, owner=root:root)
2026-03-16 01:09:42.907347 | orchestrator | changed: [testbed-node-0] => (item=ceph/radosgw-overview.json, size=39556, mode=0644, owner=root:root)
2026-03-16 01:09:42.907356 | orchestrator | changed: [testbed-node-2] => (item=ceph/radosgw-overview.json, size=39556, mode=0644, owner=root:root)
2026-03-16 01:09:42.907361 | orchestrator | changed: [testbed-node-1] => (item=ceph/README.md, size=84, mode=0644, owner=root:root)
2026-03-16 01:09:42.907365 | orchestrator | changed: [testbed-node-0] => (item=ceph/README.md, size=84, mode=0644, owner=root:root)
2026-03-16 01:09:42.907372 | orchestrator | changed: [testbed-node-2] => (item=ceph/README.md, size=84, mode=0644, owner=root:root)
2026-03-16 01:09:42.907376 | orchestrator | changed: [testbed-node-1] => (item=ceph/ceph-cluster.json, size=34113, mode=0644, owner=root:root)
2026-03-16 01:09:42.907380 | orchestrator | changed: [testbed-node-0] => (item=ceph/ceph-cluster.json, size=34113, mode=0644, owner=root:root)
2026-03-16 01:09:42.907707 | orchestrator | changed: [testbed-node-2] => (item=ceph/ceph-cluster.json, size=34113, mode=0644, owner=root:root)
2026-03-16 01:09:42.907730 | orchestrator | changed: [testbed-node-1] => (item=ceph/cephfs-overview.json, size=9025, mode=0644, owner=root:root)
2026-03-16 01:09:42.907738 | orchestrator | changed: [testbed-node-0] => (item=ceph/cephfs-overview.json, size=9025, mode=0644, owner=root:root)
2026-03-16 01:09:42.907743 | orchestrator | changed: [testbed-node-2] => (item=ceph/cephfs-overview.json, size=9025, mode=0644, owner=root:root)
2026-03-16 01:09:42.907772 | orchestrator | changed: [testbed-node-1] => (item=ceph/pool-detail.json, size=19609, mode=0644, owner=root:root)
2026-03-16 01:09:42.907777 | orchestrator | changed: [testbed-node-0] => (item=ceph/pool-detail.json, size=19609, mode=0644, owner=root:root)
2026-03-16 01:09:42.907781 | orchestrator | changed: [testbed-node-2] => (item=ceph/pool-detail.json, size=19609, mode=0644, owner=root:root)
2026-03-16 01:09:42.907794 | orchestrator | changed: [testbed-node-1] => (item=ceph/rbd-details.json, size=12997, mode=0644, owner=root:root)
2026-03-16 01:09:42.907798 | orchestrator | changed: [testbed-node-0] => (item=ceph/rbd-details.json, size=12997, mode=0644, owner=root:root)
2026-03-16 01:09:42.907802 | orchestrator | changed: [testbed-node-2] => (item=ceph/rbd-details.json, size=12997, mode=0644, owner=root:root)
2026-03-16 01:09:42.907810 | orchestrator | changed: [testbed-node-1] => (item=ceph/ceph_overview.json, size=80386, mode=0644, owner=root:root)
2026-03-16 01:09:42.907814 | orchestrator | changed: [testbed-node-0] => (item=ceph/ceph_overview.json, size=80386, mode=0644, owner=root:root)
2026-03-16 01:09:42.907818 | orchestrator | changed: [testbed-node-2] => (item=ceph/ceph_overview.json, size=80386, mode=0644, owner=root:root)
2026-03-16 01:09:42.907827 | orchestrator | changed: [testbed-node-1] => (item=ceph/radosgw-detail.json, size=19695, mode=0644, owner=root:root)
2026-03-16 01:09:42.907831 | orchestrator | changed: [testbed-node-0] => (item=ceph/radosgw-detail.json, size=19695, mode=0644, owner=root:root)
2026-03-16 01:09:42.907835 | orchestrator | changed: [testbed-node-2] => (item=ceph/radosgw-detail.json, size=19695, mode=0644, owner=root:root)
2026-03-16 01:09:42.907843 | orchestrator | changed: [testbed-node-1] => (item=ceph/osds-overview.json, size=38432, mode=0644, owner=root:root)
2026-03-16 01:09:42.907847 | orchestrator | changed: [testbed-node-0] => (item=ceph/osds-overview.json, size=38432, mode=0644, owner=root:root)
2026-03-16 01:09:42.907850 | orchestrator | changed: [testbed-node-2] => (item=ceph/osds-overview.json, size=38432, mode=0644, owner=root:root)
2026-03-16 01:09:42.907862 | orchestrator | changed: [testbed-node-1] => (item=ceph/multi-cluster-overview.json, size=62676, mode=0644, owner=root:root)
2026-03-16 01:09:42.907869 | orchestrator | changed: [testbed-node-0] => (item=ceph/multi-cluster-overview.json, size=62676, mode=0644, owner=root:root)
2026-03-16 01:09:42.907879 | orchestrator | changed: [testbed-node-2] => (item=ceph/multi-cluster-overview.json, size=62676, mode=0644, owner=root:root)
2026-03-16 01:09:42.907892 | orchestrator | changed: [testbed-node-1] => (item=ceph/hosts-overview.json, size=27218, mode=0644, owner=root:root)
2026-03-16 01:09:42.907899 | orchestrator | changed: [testbed-node-0] => (item=ceph/hosts-overview.json, size=27218, mode=0644, owner=root:root)
2026-03-16 01:09:42.907905 | orchestrator | changed: [testbed-node-2] => (item=ceph/hosts-overview.json, size=27218, mode=0644, owner=root:root)
2026-03-16 01:09:42.907912 | orchestrator | changed: [testbed-node-1] => (item=ceph/pool-overview.json, size=49139, mode=0644, owner=root:root)
2026-03-16 01:09:42.907926 | orchestrator | changed: [testbed-node-0] => (item=ceph/pool-overview.json, size=49139, mode=0644, owner=root:root)
2026-03-16 01:09:42.907932 | orchestrator | changed: [testbed-node-2] => (item=ceph/pool-overview.json, size=49139, mode=0644, owner=root:root)
2026-03-16 01:09:42.907942 | orchestrator | changed: [testbed-node-1] => (item=ceph/host-details.json, size=44791, mode=0644, owner=root:root)
2026-03-16 01:09:42.907948 | orchestrator | changed: [testbed-node-0] => (item=ceph/host-details.json, size=44791, mode=0644, owner=root:root)
2026-03-16 01:09:42.907953 | orchestrator | changed: [testbed-node-2] => (item=ceph/host-details.json, size=44791, mode=0644, owner=root:root)
2026-03-16 01:09:42.907959 | orchestrator | changed: [testbed-node-1] => (item=ceph/radosgw-sync-overview.json, size=16156, mode=0644, owner=root:root)
2026-03-16 01:09:42.907972 | orchestrator | changed: [testbed-node-0] => (item=ceph/radosgw-sync-overview.json, size=16156, mode=0644, owner=root:root)
2026-03-16 01:09:42.907978 | orchestrator | changed: [testbed-node-2] => (item=ceph/radosgw-sync-overview.json, size=16156, mode=0644, owner=root:root)
2026-03-16 01:09:42.907991 | orchestrator | changed: [testbed-node-1] => (item=openstack/openstack.json, size=57270, mode=0644, owner=root:root)
2026-03-16 01:09:42.907996 | orchestrator | changed: [testbed-node-0] => (item=openstack/openstack.json, size=57270, mode=0644, owner=root:root)
2026-03-16 01:09:42.908002 | orchestrator | changed: [testbed-node-2] => (item=openstack/openstack.json, size=57270, mode=0644, owner=root:root)
2026-03-16 01:09:42.908007 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/haproxy.json, size=410814, mode=0644, owner=root:root)
2026-03-16 01:09:42.908018 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/haproxy.json, size=410814, mode=0644, owner=root:root)
2026-03-16 01:09:42.908025 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/haproxy.json, size=410814, mode=0644, owner=root:root)
2026-03-16 01:09:42.908036 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/database.json, size=30898, mode=0644, owner=root:root)
2026-03-16 01:09:42.908042 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/database.json, size=30898, mode=0644, owner=root:root)
2026-03-16 01:09:42.908049 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/node-rsrc-use.json, size=15725, mode=0644, owner=root:root)
2026-03-16 01:09:42.908055 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/database.json, size=30898, mode=0644, owner=root:root)
2026-03-16 01:09:42.908061 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/node-rsrc-use.json, size=15725, mode=0644, owner=root:root)
2026-03-16 01:09:42.908110 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/alertmanager-overview.json, size=9645, mode=0644, owner=root:root)
2026-03-16 01:09:42.908132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1083117, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.8946853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid':
False, 'isgid': False}}) 2026-03-16 01:09:42.908139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1083091, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.8820863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1083150, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9086854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1083091, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.8820863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1083150, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9086854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1083118, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9068978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1083150, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1773620286.9086854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1083118, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9068978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1083152, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.910997, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 682774, 'inode': 1083118, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9068978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1083152, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.910997, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1083161, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9156854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1083152, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.910997, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1083161, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9156854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1083149, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9086854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1083161, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9156854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1083149, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9086854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1083112, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.893978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908282 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1083149, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9086854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1083112, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.893978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1083103, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.8866851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908301 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1083112, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.893978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1083103, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.8866851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1083109, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.891685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2026-03-16 01:09:42.908335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1083103, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.8866851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1083109, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.891685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1083100, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.885685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1083109, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.891685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1083100, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.885685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1083114, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.893978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1083100, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.885685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1083114, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.893978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1083159, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1773620286.9146855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1083114, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.893978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1083159, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9146855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1083157, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9126854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1083159, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9146855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1083157, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9126854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1083093, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.8831122, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1083157, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9126854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1083093, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.8831122, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1083094, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.8841603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1083093, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.8831122, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1083094, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.8841603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908508 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1083143, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9085207, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1083094, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.8841603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1083143, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9085207, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1083155, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.910997, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1083143, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.9085207, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1083155, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.910997, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1083155, 'dev': 124, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773620286.910997, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-16 01:09:42.908574 | orchestrator | 2026-03-16 01:09:42.908580 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-03-16 01:09:42.908587 | orchestrator | Monday 16 March 2026 01:08:13 +0000 (0:00:37.858) 0:00:52.478 ********** 2026-03-16 01:09:42.908594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-16 01:09:42.908601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-16 01:09:42.908612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-16 01:09:42.908617 | orchestrator | 2026-03-16 01:09:42.908622 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-03-16 01:09:42.908629 | orchestrator | Monday 16 March 2026 01:08:14 +0000 (0:00:01.059) 0:00:53.538 ********** 2026-03-16 01:09:42.908639 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:09:42.908646 | orchestrator | 2026-03-16 01:09:42.908652 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-03-16 01:09:42.908709 | orchestrator | Monday 16 March 2026 01:08:17 +0000 (0:00:02.722) 0:00:56.261 ********** 2026-03-16 01:09:42.908719 | orchestrator | changed: [testbed-node-0] 
2026-03-16 01:09:42.908725 | orchestrator |
2026-03-16 01:09:42.908731 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-16 01:09:42.908736 | orchestrator | Monday 16 March 2026 01:08:20 +0000 (0:00:02.714) 0:00:58.975 **********
2026-03-16 01:09:42.908741 | orchestrator |
2026-03-16 01:09:42.908747 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-16 01:09:42.908767 | orchestrator | Monday 16 March 2026 01:08:20 +0000 (0:00:00.068) 0:00:59.044 **********
2026-03-16 01:09:42.908773 | orchestrator |
2026-03-16 01:09:42.908779 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-16 01:09:42.908784 | orchestrator | Monday 16 March 2026 01:08:20 +0000 (0:00:00.067) 0:00:59.111 **********
2026-03-16 01:09:42.908790 | orchestrator |
2026-03-16 01:09:42.908796 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2026-03-16 01:09:42.908802 | orchestrator | Monday 16 March 2026 01:08:20 +0000 (0:00:00.255) 0:00:59.367 **********
2026-03-16 01:09:42.908808 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:09:42.908814 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:09:42.908820 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:09:42.908825 | orchestrator |
2026-03-16 01:09:42.908831 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2026-03-16 01:09:42.908837 | orchestrator | Monday 16 March 2026 01:08:27 +0000 (0:00:06.757) 0:01:06.125 **********
2026-03-16 01:09:42.908843 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:09:42.908849 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:09:42.908855 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2026-03-16 01:09:42.908862 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2026-03-16 01:09:42.908868 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2026-03-16 01:09:42.908874 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:09:42.908880 | orchestrator |
2026-03-16 01:09:42.908887 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-03-16 01:09:42.908892 | orchestrator | Monday 16 March 2026 01:09:06 +0000 (0:00:38.961) 0:01:45.087 **********
2026-03-16 01:09:42.908897 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:09:42.908903 | orchestrator | changed: [testbed-node-1]
2026-03-16 01:09:42.908916 | orchestrator | changed: [testbed-node-2]
2026-03-16 01:09:42.908922 | orchestrator |
2026-03-16 01:09:42.908928 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-03-16 01:09:42.908934 | orchestrator | Monday 16 March 2026 01:09:35 +0000 (0:00:29.551) 0:02:14.638 **********
2026-03-16 01:09:42.908940 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:09:42.908945 | orchestrator |
2026-03-16 01:09:42.908951 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2026-03-16 01:09:42.908957 | orchestrator | Monday 16 March 2026 01:09:38 +0000 (0:00:02.696) 0:02:17.335 **********
2026-03-16 01:09:42.908964 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:09:42.908969 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:09:42.908975 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:09:42.908980 | orchestrator |
2026-03-16 01:09:42.908986 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2026-03-16 01:09:42.908992 | orchestrator | Monday 16 March 2026 01:09:38 +0000 (0:00:00.512) 0:02:17.848 **********
2026-03-16 01:09:42.908999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2026-03-16 01:09:42.909007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2026-03-16 01:09:42.909014 | orchestrator |
2026-03-16 01:09:42.909020 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2026-03-16 01:09:42.909026 | orchestrator | Monday 16 March 2026 01:09:41 +0000 (0:00:02.653) 0:02:20.502 **********
2026-03-16 01:09:42.909032 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:09:42.909039 | orchestrator |
2026-03-16 01:09:42.909045 | orchestrator | PLAY RECAP *********************************************************************
2026-03-16 01:09:42.909052 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-16 01:09:42.909060 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-16 01:09:42.909066 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-16 01:09:42.909072 | orchestrator |
2026-03-16 01:09:42.909078 | orchestrator |
2026-03-16 01:09:42.909084 | orchestrator | TASKS RECAP ********************************************************************
2026-03-16 01:09:42.909090 | orchestrator | Monday 16 March 2026 01:09:41 +0000 (0:00:00.277) 0:02:20.780 **********
2026-03-16 01:09:42.909097 | orchestrator | ===============================================================================
2026-03-16 01:09:42.909108 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.96s
2026-03-16 01:09:42.909120 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 37.86s
2026-03-16 01:09:42.909126 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 29.55s
2026-03-16 01:09:42.909132 | orchestrator | grafana : Restart first grafana container ------------------------------- 6.76s
2026-03-16 01:09:42.909138 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.72s
2026-03-16 01:09:42.909145 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.71s
2026-03-16 01:09:42.909151 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.70s
2026-03-16 01:09:42.909157 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.65s
2026-03-16 01:09:42.909169 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.58s
2026-03-16 01:09:42.909172 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.58s
2026-03-16 01:09:42.909176 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.28s
2026-03-16 01:09:42.909180 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.18s
2026-03-16 01:09:42.909185 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.18s
2026-03-16 01:09:42.909191 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.06s
2026-03-16 01:09:42.909197 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.93s
2026-03-16 01:09:42.909202 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.91s
2026-03-16 01:09:42.909212 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.79s
2026-03-16 01:09:42.909221 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.75s
2026-03-16 01:09:42.909226 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.75s
2026-03-16 01:09:42.909231 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.73s
2026-03-16 01:09:42.909238 | orchestrator | 2026-03-16 01:09:42 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:09:45.948257 | orchestrator | 2026-03-16 01:09:45 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED
2026-03-16 01:09:45.950246 | orchestrator | 2026-03-16 01:09:45 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED
2026-03-16 01:09:45.950324 | orchestrator | 2026-03-16 01:09:45 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:09:48.995006 | orchestrator | 2026-03-16 01:09:48 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED
2026-03-16 01:09:48.996082 | orchestrator | 2026-03-16 01:09:49 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED
2026-03-16 01:09:48.996151 | orchestrator | 2026-03-16 01:09:49 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:09:52.046140 | orchestrator | 2026-03-16 01:09:52 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED
2026-03-16 01:09:52.047199 | orchestrator | 2026-03-16 01:09:52 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED
2026-03-16 01:09:52.047393 | orchestrator | 2026-03-16 01:09:52 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:09:55.089096 | orchestrator | 2026-03-16 01:09:55 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED
2026-03-16
01:09:55.089168 | orchestrator | 2026-03-16 01:09:55 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:09:55.089174 | orchestrator | 2026-03-16 01:09:55 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:09:58.141381 | orchestrator | 2026-03-16 01:09:58 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED 2026-03-16 01:09:58.144375 | orchestrator | 2026-03-16 01:09:58 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:09:58.144711 | orchestrator | 2026-03-16 01:09:58 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:10:01.190609 | orchestrator | 2026-03-16 01:10:01 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED 2026-03-16 01:10:01.191696 | orchestrator | 2026-03-16 01:10:01 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:10:01.191876 | orchestrator | 2026-03-16 01:10:01 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:10:04.233638 | orchestrator | 2026-03-16 01:10:04 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED 2026-03-16 01:10:04.235653 | orchestrator | 2026-03-16 01:10:04 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:10:04.235762 | orchestrator | 2026-03-16 01:10:04 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:10:07.283598 | orchestrator | 2026-03-16 01:10:07 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED 2026-03-16 01:10:07.286242 | orchestrator | 2026-03-16 01:10:07 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:10:07.286335 | orchestrator | 2026-03-16 01:10:07 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:10:10.335067 | orchestrator | 2026-03-16 01:10:10 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED 2026-03-16 01:10:10.335619 | orchestrator | 2026-03-16 01:10:10 | INFO  | Task 
f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:10:10.335654 | orchestrator | 2026-03-16 01:10:10 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:10:13.395016 | orchestrator | 2026-03-16 01:10:13 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED 2026-03-16 01:10:13.396465 | orchestrator | 2026-03-16 01:10:13 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:10:13.396544 | orchestrator | 2026-03-16 01:10:13 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:10:16.435968 | orchestrator | 2026-03-16 01:10:16 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED 2026-03-16 01:10:16.440318 | orchestrator | 2026-03-16 01:10:16 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:10:16.440396 | orchestrator | 2026-03-16 01:10:16 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:10:19.501482 | orchestrator | 2026-03-16 01:10:19 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED 2026-03-16 01:10:19.504054 | orchestrator | 2026-03-16 01:10:19 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:10:19.504116 | orchestrator | 2026-03-16 01:10:19 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:10:22.549907 | orchestrator | 2026-03-16 01:10:22 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED 2026-03-16 01:10:22.551208 | orchestrator | 2026-03-16 01:10:22 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:10:22.551274 | orchestrator | 2026-03-16 01:10:22 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:10:25.595638 | orchestrator | 2026-03-16 01:10:25 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED 2026-03-16 01:10:25.596314 | orchestrator | 2026-03-16 01:10:25 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 
01:10:25.596351 | orchestrator | 2026-03-16 01:10:25 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:10:28.656823 | orchestrator | 2026-03-16 01:10:28 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED 2026-03-16 01:10:28.658361 | orchestrator | 2026-03-16 01:10:28 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:10:28.658425 | orchestrator | 2026-03-16 01:10:28 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:10:31.701502 | orchestrator | 2026-03-16 01:10:31 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED 2026-03-16 01:10:31.703081 | orchestrator | 2026-03-16 01:10:31 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:10:31.703149 | orchestrator | 2026-03-16 01:10:31 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:10:34.748570 | orchestrator | 2026-03-16 01:10:34 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED 2026-03-16 01:10:34.749447 | orchestrator | 2026-03-16 01:10:34 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:10:34.749482 | orchestrator | 2026-03-16 01:10:34 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:10:37.784297 | orchestrator | 2026-03-16 01:10:37 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED 2026-03-16 01:10:37.784547 | orchestrator | 2026-03-16 01:10:37 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:10:37.784568 | orchestrator | 2026-03-16 01:10:37 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:10:40.830247 | orchestrator | 2026-03-16 01:10:40 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED 2026-03-16 01:10:40.831711 | orchestrator | 2026-03-16 01:10:40 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:10:40.831754 | orchestrator | 2026-03-16 01:10:40 | INFO  | Wait 1 second(s) 
until the next check 2026-03-16 01:10:43.862810 | orchestrator | 2026-03-16 01:10:43 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED 2026-03-16 01:10:43.863707 | orchestrator | 2026-03-16 01:10:43 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:10:43.863737 | orchestrator | 2026-03-16 01:10:43 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:10:46.903930 | orchestrator | 2026-03-16 01:10:46 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED 2026-03-16 01:10:46.905185 | orchestrator | 2026-03-16 01:10:46 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:10:46.905210 | orchestrator | 2026-03-16 01:10:46 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:10:49.951415 | orchestrator | 2026-03-16 01:10:49 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED 2026-03-16 01:10:49.952006 | orchestrator | 2026-03-16 01:10:49 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:10:49.953359 | orchestrator | 2026-03-16 01:10:49 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:10:52.998452 | orchestrator | 2026-03-16 01:10:53 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED 2026-03-16 01:10:52.999817 | orchestrator | 2026-03-16 01:10:53 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:10:52.999961 | orchestrator | 2026-03-16 01:10:53 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:10:56.047712 | orchestrator | 2026-03-16 01:10:56 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED 2026-03-16 01:10:56.049496 | orchestrator | 2026-03-16 01:10:56 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:10:56.049542 | orchestrator | 2026-03-16 01:10:56 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:10:59.094149 | orchestrator | 2026-03-16 
01:10:59 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED 2026-03-16 01:10:59.097090 | orchestrator | 2026-03-16 01:10:59 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:10:59.097741 | orchestrator | 2026-03-16 01:10:59 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:11:02.136711 | orchestrator | 2026-03-16 01:11:02 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED 2026-03-16 01:11:02.138569 | orchestrator | 2026-03-16 01:11:02 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:11:02.138692 | orchestrator | 2026-03-16 01:11:02 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:11:05.183656 | orchestrator | 2026-03-16 01:11:05 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED 2026-03-16 01:11:05.183822 | orchestrator | 2026-03-16 01:11:05 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:11:05.183886 | orchestrator | 2026-03-16 01:11:05 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:11:08.225265 | orchestrator | 2026-03-16 01:11:08 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED 2026-03-16 01:11:08.227386 | orchestrator | 2026-03-16 01:11:08 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:11:08.227438 | orchestrator | 2026-03-16 01:11:08 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:11:11.270283 | orchestrator | 2026-03-16 01:11:11 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED 2026-03-16 01:11:11.271297 | orchestrator | 2026-03-16 01:11:11 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:11:11.271333 | orchestrator | 2026-03-16 01:11:11 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:11:14.318175 | orchestrator | 2026-03-16 01:11:14 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state 
STARTED 2026-03-16 01:11:14.319220 | orchestrator | 2026-03-16 01:11:14 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:11:14.319262 | orchestrator | 2026-03-16 01:11:14 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:11:17.383271 | orchestrator | 2026-03-16 01:11:17 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED 2026-03-16 01:11:17.385115 | orchestrator | 2026-03-16 01:11:17 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:11:17.385238 | orchestrator | 2026-03-16 01:11:17 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:11:20.435189 | orchestrator | 2026-03-16 01:11:20 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED 2026-03-16 01:11:20.436210 | orchestrator | 2026-03-16 01:11:20 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:11:20.436283 | orchestrator | 2026-03-16 01:11:20 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:11:23.478578 | orchestrator | 2026-03-16 01:11:23 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED 2026-03-16 01:11:23.478651 | orchestrator | 2026-03-16 01:11:23 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:11:23.478788 | orchestrator | 2026-03-16 01:11:23 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:11:26.520042 | orchestrator | 2026-03-16 01:11:26 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED 2026-03-16 01:11:26.523914 | orchestrator | 2026-03-16 01:11:26 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:11:26.523986 | orchestrator | 2026-03-16 01:11:26 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:11:29.567933 | orchestrator | 2026-03-16 01:11:29 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED 2026-03-16 01:11:29.569613 | orchestrator | 2026-03-16 01:11:29 | INFO  
| Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:11:29.569667 | orchestrator | 2026-03-16 01:11:29 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:11:32.617555 | orchestrator | 2026-03-16 01:11:32 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED 2026-03-16 01:11:32.619078 | orchestrator | 2026-03-16 01:11:32 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:11:32.619129 | orchestrator | 2026-03-16 01:11:32 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:11:35.667204 | orchestrator | 2026-03-16 01:11:35 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED 2026-03-16 01:11:35.668874 | orchestrator | 2026-03-16 01:11:35 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:11:35.668916 | orchestrator | 2026-03-16 01:11:35 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:11:38.710645 | orchestrator | 2026-03-16 01:11:38 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED 2026-03-16 01:11:38.711881 | orchestrator | 2026-03-16 01:11:38 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:11:38.712217 | orchestrator | 2026-03-16 01:11:38 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:11:41.748458 | orchestrator | 2026-03-16 01:11:41 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED 2026-03-16 01:11:41.750527 | orchestrator | 2026-03-16 01:11:41 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 01:11:41.750586 | orchestrator | 2026-03-16 01:11:41 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:11:44.805778 | orchestrator | 2026-03-16 01:11:44 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED 2026-03-16 01:11:44.805869 | orchestrator | 2026-03-16 01:11:44 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED 2026-03-16 
01:11:44.805880 | orchestrator | 2026-03-16 01:11:44 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:11:47.846474 | orchestrator | 2026-03-16 01:11:47 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED
2026-03-16 01:11:47.847768 | orchestrator | 2026-03-16 01:11:47 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED
2026-03-16 01:11:47.848502 | orchestrator | 2026-03-16 01:11:47 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:13:52.657469 | orchestrator | 2026-03-16 01:13:52 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state
STARTED
2026-03-16 01:13:52.657609 | orchestrator | 2026-03-16 01:13:52 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED
2026-03-16 01:13:52.657626 | orchestrator | 2026-03-16 01:13:52 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:13:55.705127 | orchestrator | 2026-03-16 01:13:55 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED
2026-03-16 01:13:55.705984 | orchestrator | 2026-03-16 01:13:55 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED
2026-03-16 01:13:55.706071 | orchestrator | 2026-03-16 01:13:55 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:13:58.749259 | orchestrator | 2026-03-16 01:13:58 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED
2026-03-16 01:13:58.750084 | orchestrator | 2026-03-16 01:13:58 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state STARTED
2026-03-16 01:13:58.750152 | orchestrator | 2026-03-16 01:13:58 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:14:01.798660 | orchestrator | 2026-03-16 01:14:01 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED
2026-03-16 01:14:01.801328 | orchestrator | 2026-03-16 01:14:01 | INFO  | Task f03cd469-304b-4552-8070-c1e2eb2af1f1 is in state SUCCESS
2026-03-16 01:14:01.803957 | orchestrator |
2026-03-16 01:14:01.804039 | orchestrator |
2026-03-16 01:14:01.804059 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-16 01:14:01.804069 | orchestrator |
2026-03-16 01:14:01.804077 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-03-16 01:14:01.804085 | orchestrator | Monday 16 March 2026 01:05:18 +0000 (0:00:00.310) 0:00:00.310 **********
2026-03-16 01:14:01.804093 | orchestrator | changed: [testbed-manager]
2026-03-16 01:14:01.804101 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:14:01.804108 | orchestrator | changed:
[testbed-node-1]
2026-03-16 01:14:01.804115 | orchestrator | changed: [testbed-node-2]
2026-03-16 01:14:01.804123 | orchestrator | changed: [testbed-node-3]
2026-03-16 01:14:01.804130 | orchestrator | changed: [testbed-node-4]
2026-03-16 01:14:01.804137 | orchestrator | changed: [testbed-node-5]
2026-03-16 01:14:01.804163 | orchestrator |
2026-03-16 01:14:01.804171 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-16 01:14:01.804179 | orchestrator | Monday 16 March 2026 01:05:19 +0000 (0:00:01.166) 0:00:01.477 **********
2026-03-16 01:14:01.804216 | orchestrator | changed: [testbed-manager]
2026-03-16 01:14:01.804225 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:14:01.804233 | orchestrator | changed: [testbed-node-1]
2026-03-16 01:14:01.804240 | orchestrator | changed: [testbed-node-2]
2026-03-16 01:14:01.804248 | orchestrator | changed: [testbed-node-3]
2026-03-16 01:14:01.804255 | orchestrator | changed: [testbed-node-4]
2026-03-16 01:14:01.804262 | orchestrator | changed: [testbed-node-5]
2026-03-16 01:14:01.804270 | orchestrator |
2026-03-16 01:14:01.804277 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-16 01:14:01.804285 | orchestrator | Monday 16 March 2026 01:05:20 +0000 (0:00:01.085) 0:00:02.562 **********
2026-03-16 01:14:01.804293 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-03-16 01:14:01.804300 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-03-16 01:14:01.804308 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-03-16 01:14:01.804315 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-03-16 01:14:01.804323 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-03-16 01:14:01.804330 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-03-16 01:14:01.804337 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-03-16 01:14:01.804345 | orchestrator |
2026-03-16 01:14:01.804353 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-03-16 01:14:01.804360 | orchestrator |
2026-03-16 01:14:01.804368 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-16 01:14:01.804375 | orchestrator | Monday 16 March 2026 01:05:21 +0000 (0:00:01.307) 0:00:03.870 **********
2026-03-16 01:14:01.804383 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-16 01:14:01.804390 | orchestrator |
2026-03-16 01:14:01.804398 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-03-16 01:14:01.804405 | orchestrator | Monday 16 March 2026 01:05:22 +0000 (0:00:00.536) 0:00:04.406 **********
2026-03-16 01:14:01.804413 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-03-16 01:14:01.804421 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-03-16 01:14:01.804428 | orchestrator |
2026-03-16 01:14:01.804436 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-03-16 01:14:01.804443 | orchestrator | Monday 16 March 2026 01:05:27 +0000 (0:00:04.971) 0:00:09.378 **********
2026-03-16 01:14:01.804451 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-16 01:14:01.804477 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-16 01:14:01.804485 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:14:01.804492 | orchestrator |
2026-03-16 01:14:01.804500 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-03-16 01:14:01.804507 | orchestrator | Monday 16 March 2026 01:05:31 +0000 (0:00:04.084) 0:00:13.462 **********
2026-03-16 01:14:01.804514 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:14:01.804522 | orchestrator |
2026-03-16 01:14:01.804531 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-03-16 01:14:01.804549 | orchestrator | Monday 16 March 2026 01:05:31 +0000 (0:00:00.617) 0:00:14.080 **********
2026-03-16 01:14:01.804557 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:14:01.804566 | orchestrator |
2026-03-16 01:14:01.804574 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-03-16 01:14:01.804583 | orchestrator | Monday 16 March 2026 01:05:33 +0000 (0:00:01.450) 0:00:15.530 **********
2026-03-16 01:14:01.804679 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:14:01.804688 | orchestrator |
2026-03-16 01:14:01.804741 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-16 01:14:01.804750 | orchestrator | Monday 16 March 2026 01:05:36 +0000 (0:00:02.683) 0:00:18.214 **********
2026-03-16 01:14:01.804760 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:14:01.804769 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:14:01.804777 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:14:01.804786 | orchestrator |
2026-03-16 01:14:01.804794 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-16 01:14:01.804803 | orchestrator | Monday 16 March 2026 01:05:37 +0000 (0:00:00.991) 0:00:19.205 **********
2026-03-16 01:14:01.804812 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:14:01.804820 | orchestrator |
2026-03-16 01:14:01.804829 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-03-16 01:14:01.804841 | orchestrator | Monday 16 March 2026 01:06:09 +0000 (0:00:32.496) 0:00:51.702 **********
2026-03-16 01:14:01.804855 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:14:01.804874 | orchestrator |
2026-03-16 01:14:01.804885 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-16 01:14:01.804897 | orchestrator | Monday 16 March 2026 01:06:25 +0000 (0:00:15.931) 0:01:07.634 **********
2026-03-16 01:14:01.804909 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:14:01.804920 | orchestrator |
2026-03-16 01:14:01.804931 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-16 01:14:01.804944 | orchestrator | Monday 16 March 2026 01:06:40 +0000 (0:00:15.004) 0:01:22.639 **********
2026-03-16 01:14:01.804973 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:14:01.805004 | orchestrator |
2026-03-16 01:14:01.805017 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-03-16 01:14:01.805030 | orchestrator | Monday 16 March 2026 01:06:41 +0000 (0:00:01.077) 0:01:23.717 **********
2026-03-16 01:14:01.805043 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:14:01.805055 | orchestrator |
2026-03-16 01:14:01.805068 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-16 01:14:01.805081 | orchestrator | Monday 16 March 2026 01:06:42 +0000 (0:00:00.455) 0:01:24.172 **********
2026-03-16 01:14:01.805094 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-16 01:14:01.805107 | orchestrator |
2026-03-16 01:14:01.805120 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-16 01:14:01.805134 | orchestrator | Monday 16 March 2026 01:06:42 +0000 (0:00:00.472) 0:01:24.645 **********
2026-03-16 01:14:01.805146 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:14:01.805158 | orchestrator |
2026-03-16 01:14:01.805171 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-16 01:14:01.805184 | orchestrator | Monday 16 March 2026 01:07:03 +0000 (0:00:21.274) 0:01:45.919 **********
2026-03-16 01:14:01.805274 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:14:01.805288 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:14:01.805302 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:14:01.805341 | orchestrator |
2026-03-16 01:14:01.805352 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-03-16 01:14:01.805364 | orchestrator |
2026-03-16 01:14:01.805512 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-16 01:14:01.805527 | orchestrator | Monday 16 March 2026 01:07:04 +0000 (0:00:00.283) 0:01:46.203 **********
2026-03-16 01:14:01.805539 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-16 01:14:01.805551 | orchestrator |
2026-03-16 01:14:01.805558 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-03-16 01:14:01.805566 | orchestrator | Monday 16 March 2026 01:07:04 +0000 (0:00:00.458) 0:01:46.661 **********
2026-03-16 01:14:01.805573 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:14:01.805585 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:14:01.805595 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:14:01.805602 | orchestrator |
2026-03-16 01:14:01.805610 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-03-16 01:14:01.805617 | orchestrator | Monday 16 March 2026 01:07:06 +0000 (0:00:02.434) 0:01:49.096 **********
2026-03-16 01:14:01.805624 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:14:01.805632 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:14:01.805639 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:14:01.805646 | orchestrator |
2026-03-16 01:14:01.805653 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-16 01:14:01.805661 | orchestrator | Monday 16 March 2026 01:07:09 +0000 (0:00:02.718) 0:01:51.814 **********
2026-03-16 01:14:01.805668 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:14:01.805687 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:14:01.805695 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:14:01.805702 | orchestrator |
2026-03-16 01:14:01.805709 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-16 01:14:01.805716 | orchestrator | Monday 16 March 2026 01:07:09 +0000 (0:00:00.343) 0:01:52.158 **********
2026-03-16 01:14:01.805724 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-16 01:14:01.805731 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:14:01.805738 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-16 01:14:01.805763 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:14:01.805771 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-16 01:14:01.805778 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-03-16 01:14:01.805846 | orchestrator |
2026-03-16 01:14:01.805860 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-16 01:14:01.805882 | orchestrator | Monday 16 March 2026 01:07:17 +0000 (0:00:07.648) 0:01:59.806 **********
2026-03-16 01:14:01.805893 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:14:01.805903 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:14:01.805914 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:14:01.805925 | orchestrator |
2026-03-16 01:14:01.805936 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-16 01:14:01.805948 | orchestrator | Monday 16 March 2026 01:07:18 +0000 (0:00:00.474) 0:02:00.280 **********
2026-03-16 01:14:01.805960 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-16 01:14:01.805971 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:14:01.805981 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-16 01:14:01.805993 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:14:01.806004 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-16 01:14:01.806062 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:14:01.806074 | orchestrator |
2026-03-16 01:14:01.806086 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-03-16 01:14:01.806109 | orchestrator | Monday 16 March 2026 01:07:19 +0000 (0:00:00.914) 0:02:01.195 **********
2026-03-16 01:14:01.806123 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:14:01.806135 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:14:01.806146 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:14:01.806157 | orchestrator |
2026-03-16 01:14:01.806170 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-03-16 01:14:01.806183 | orchestrator | Monday 16 March 2026 01:07:19 +0000 (0:00:00.804) 0:02:01.999 **********
2026-03-16 01:14:01.806214 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:14:01.806226 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:14:01.806238 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:14:01.806250 | orchestrator |
2026-03-16 01:14:01.806262 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-03-16 01:14:01.806274 | orchestrator | Monday 16 March 2026 01:07:20 +0000 (0:00:01.152) 0:02:03.152 **********
2026-03-16 01:14:01.806282 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:14:01.806291 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:14:01.806322 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:14:01.806337 | orchestrator |
2026-03-16 01:14:01.806348 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-03-16 01:14:01.806360 | orchestrator | Monday 16 March 2026 01:07:23 +0000 (0:00:02.146) 0:02:05.298 **********
2026-03-16 01:14:01.806371 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:14:01.806383 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:14:01.806394 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:14:01.806404 | orchestrator |
2026-03-16 01:14:01.806410 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-16 01:14:01.806417 | orchestrator | Monday 16 March 2026 01:07:43 +0000 (0:00:20.510) 0:02:25.809 **********
2026-03-16 01:14:01.806424 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:14:01.806430 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:14:01.806437 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:14:01.806443 | orchestrator |
2026-03-16 01:14:01.806450 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-16 01:14:01.806457 | orchestrator | Monday 16 March 2026 01:07:58 +0000 (0:00:14.852) 0:02:40.662 **********
2026-03-16 01:14:01.806463 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:14:01.806470 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:14:01.806476 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:14:01.806483 | orchestrator |
2026-03-16 01:14:01.806490 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-03-16 01:14:01.806496 | orchestrator | Monday 16 March 2026 01:07:59 +0000 (0:00:00.887) 0:02:41.549 **********
2026-03-16 01:14:01.806503 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:14:01.806509 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:14:01.806516 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:14:01.806522 | orchestrator |
2026-03-16 01:14:01.806529 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-03-16 01:14:01.806536 | orchestrator | Monday 16 March 2026 01:08:14 +0000 (0:00:14.906) 0:02:56.456 **********
2026-03-16 01:14:01.806543 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:14:01.806549 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:14:01.806556 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:14:01.806562 | orchestrator |
2026-03-16 01:14:01.806570 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-16 01:14:01.806581 | orchestrator | Monday 16 March 2026 01:08:15 +0000 (0:00:01.113) 0:02:57.570 **********
2026-03-16 01:14:01.806605 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:14:01.806616 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:14:01.806628 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:14:01.806639 | orchestrator |
2026-03-16 01:14:01.806651 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-03-16 01:14:01.806663 | orchestrator |
2026-03-16 01:14:01.806670 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-16 01:14:01.806684 | orchestrator | Monday 16 March 2026 01:08:15 +0000 (0:00:00.498) 0:02:58.068 **********
2026-03-16 01:14:01.806691 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-16 01:14:01.806704 | orchestrator |
2026-03-16 01:14:01.806715 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2026-03-16 01:14:01.806726 | orchestrator | Monday 16 March 2026 01:08:16 +0000 (0:00:00.601) 0:02:58.669 **********
2026-03-16 01:14:01.806737 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-03-16 01:14:01.806746 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-03-16 01:14:01.806753 | orchestrator |
2026-03-16 01:14:01.806760 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2026-03-16 01:14:01.806766 | orchestrator | Monday 16 March 2026 01:08:20 +0000 (0:00:03.782) 0:03:02.451 **********
2026-03-16 01:14:01.806773 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-03-16 01:14:01.806787 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-03-16 01:14:01.806794 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-03-16 01:14:01.806801 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-03-16 01:14:01.806808 | orchestrator |
2026-03-16 01:14:01.806815 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-03-16 01:14:01.806826 | orchestrator | Monday 16 March 2026 01:08:26 +0000 (0:00:06.695) 0:03:09.146 **********
2026-03-16 01:14:01.806835 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-16 01:14:01.806851 | orchestrator |
2026-03-16 01:14:01.806864 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-03-16 01:14:01.806875 | orchestrator | Monday 16 March 2026 01:08:30 +0000 (0:00:03.097) 0:03:12.244 **********
2026-03-16 01:14:01.806886 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-16 01:14:01.806897 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-03-16 01:14:01.806908 | orchestrator |
2026-03-16 01:14:01.806919 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-03-16 01:14:01.806929 | orchestrator | Monday 16 March 2026 01:08:34 +0000 (0:00:04.595) 0:03:16.840 **********
2026-03-16 01:14:01.806940 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-16 01:14:01.806951 | orchestrator |
2026-03-16 01:14:01.806962 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2026-03-16 01:14:01.806973 | orchestrator | Monday 16 March 2026 01:08:38 +0000 (0:00:03.569) 0:03:20.409 **********
2026-03-16 01:14:01.806985 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2026-03-16 01:14:01.806997 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2026-03-16 01:14:01.807008 | orchestrator |
2026-03-16 01:14:01.807020 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-03-16 01:14:01.807050 | orchestrator | Monday 16 March 2026 01:08:46 +0000 (0:00:07.779) 0:03:28.188 **********
2026-03-16 01:14:01.807062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'},
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-16 01:14:01.807080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-16 01:14:01.807093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-16 01:14:01.807108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:01.807117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': 
'30'}}}) 2026-03-16 01:14:01.807128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:01.807136 | orchestrator | 2026-03-16 01:14:01.807148 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-03-16 01:14:01.807159 | orchestrator | Monday 16 March 2026 01:08:47 +0000 (0:00:01.350) 0:03:29.538 ********** 2026-03-16 01:14:01.807170 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:14:01.807181 | orchestrator | 2026-03-16 01:14:01.807211 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-03-16 01:14:01.807222 | orchestrator | Monday 16 March 2026 01:08:47 +0000 (0:00:00.148) 0:03:29.687 ********** 2026-03-16 01:14:01.807233 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:14:01.807456 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:14:01.807479 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:14:01.807489 | orchestrator | 2026-03-16 01:14:01.807496 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-03-16 01:14:01.807503 | orchestrator | Monday 16 March 2026 01:08:48 +0000 (0:00:00.489) 0:03:30.176 ********** 2026-03-16 01:14:01.807509 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-16 01:14:01.807516 | orchestrator | 2026-03-16 01:14:01.807523 | orchestrator | TASK [nova : Set vendordata file path] 
***************************************** 2026-03-16 01:14:01.807529 | orchestrator | Monday 16 March 2026 01:08:48 +0000 (0:00:00.680) 0:03:30.857 ********** 2026-03-16 01:14:01.807536 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:14:01.807543 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:14:01.807549 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:14:01.807557 | orchestrator | 2026-03-16 01:14:01.807568 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-16 01:14:01.807580 | orchestrator | Monday 16 March 2026 01:08:49 +0000 (0:00:00.347) 0:03:31.204 ********** 2026-03-16 01:14:01.807597 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 01:14:01.807609 | orchestrator | 2026-03-16 01:14:01.807620 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-16 01:14:01.807632 | orchestrator | Monday 16 March 2026 01:08:49 +0000 (0:00:00.538) 0:03:31.743 ********** 2026-03-16 01:14:01.807653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-16 01:14:01.807675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-16 01:14:01.807689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-16 01:14:01.807705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:01.807717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:01.807743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:01.807754 | orchestrator | 2026-03-16 01:14:01.807766 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-16 01:14:01.807777 | orchestrator | Monday 16 March 2026 01:08:52 +0000 (0:00:02.783) 0:03:34.526 ********** 2026-03-16 01:14:01.807789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-16 01:14:01.807802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-16 01:14:01.807813 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:14:01.807835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-16 01:14:01.807860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-16 01:14:01.807871 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:14:01.807883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-16 01:14:01.807927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-16 01:14:01.807942 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:14:01.807954 | orchestrator | 2026-03-16 01:14:01.807966 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-16 01:14:01.807978 | orchestrator | Monday 16 March 2026 01:08:52 +0000 (0:00:00.594) 0:03:35.121 ********** 2026-03-16 01:14:01.808018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-16 01:14:01.808040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-16 01:14:01.808053 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:14:01.808100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-16 01:14:01.808114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-16 01:14:01.808126 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:14:01.808143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-16 01:14:01.808162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-16 01:14:01.808212 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:14:01.808226 | orchestrator | 2026-03-16 01:14:01.808239 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-03-16 01:14:01.808251 | orchestrator | Monday 16 March 2026 01:08:53 +0000 (0:00:00.816) 0:03:35.937 ********** 2026-03-16 01:14:01.808273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-16 01:14:01.808287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-16 01:14:01.808305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-16 01:14:01.808332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:01.808346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:01.808358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:01.808370 | orchestrator | 2026-03-16 01:14:01.808382 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-03-16 01:14:01.808394 | orchestrator | Monday 16 March 2026 01:08:56 +0000 (0:00:02.685) 0:03:38.622 ********** 2026-03-16 01:14:01.808407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-16 01:14:01.808430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 
'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-16 01:14:01.808449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-16 01:14:01.808462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:01.808475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:01.808492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:01.808512 | orchestrator | 2026-03-16 01:14:01.808524 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-03-16 01:14:01.808537 | orchestrator | Monday 16 March 2026 01:09:02 +0000 (0:00:05.781) 0:03:44.403 ********** 2026-03-16 01:14:01.808556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-16 01:14:01.808569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-16 01:14:01.808581 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:14:01.808593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-16 01:14:01.808606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-16 01:14:01.808623 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:14:01.808639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-16 01:14:01.808689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-16 01:14:01.808701 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:14:01.808712 | orchestrator | 2026-03-16 01:14:01.808725 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-03-16 
01:14:01.808737 | orchestrator | Monday 16 March 2026 01:09:02 +0000 (0:00:00.611) 0:03:45.015 ********** 2026-03-16 01:14:01.808749 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:14:01.808761 | orchestrator | changed: [testbed-node-1] 2026-03-16 01:14:01.808773 | orchestrator | changed: [testbed-node-2] 2026-03-16 01:14:01.808785 | orchestrator | 2026-03-16 01:14:01.808797 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-03-16 01:14:01.808808 | orchestrator | Monday 16 March 2026 01:09:04 +0000 (0:00:01.615) 0:03:46.630 ********** 2026-03-16 01:14:01.808821 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:14:01.808832 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:14:01.808844 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:14:01.808855 | orchestrator | 2026-03-16 01:14:01.808869 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-03-16 01:14:01.808881 | orchestrator | Monday 16 March 2026 01:09:04 +0000 (0:00:00.333) 0:03:46.964 ********** 2026-03-16 01:14:01.808894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-16 01:14:01.808925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-16 01:14:01.808963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-16 01:14:01.808976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:01.808988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:01.809006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:01.809018 | orchestrator | 2026-03-16 01:14:01.809030 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-16 01:14:01.809046 | orchestrator | Monday 16 March 2026 01:09:06 +0000 (0:00:02.158) 0:03:49.122 ********** 2026-03-16 01:14:01.809057 | orchestrator | 2026-03-16 01:14:01.809069 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-16 01:14:01.809081 | orchestrator | Monday 16 March 2026 01:09:07 +0000 (0:00:00.135) 0:03:49.257 ********** 2026-03-16 01:14:01.809092 | orchestrator | 2026-03-16 01:14:01.809104 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-16 01:14:01.809117 | orchestrator | Monday 16 March 2026 01:09:07 +0000 (0:00:00.132) 0:03:49.390 ********** 2026-03-16 01:14:01.809129 | orchestrator | 2026-03-16 01:14:01.809140 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-03-16 01:14:01.809151 | orchestrator | Monday 16 March 2026 
01:09:07 +0000 (0:00:00.162) 0:03:49.552 ********** 2026-03-16 01:14:01.809163 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:14:01.809176 | orchestrator | changed: [testbed-node-1] 2026-03-16 01:14:01.809203 | orchestrator | changed: [testbed-node-2] 2026-03-16 01:14:01.809216 | orchestrator | 2026-03-16 01:14:01.809227 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-03-16 01:14:01.809237 | orchestrator | Monday 16 March 2026 01:09:24 +0000 (0:00:16.982) 0:04:06.535 ********** 2026-03-16 01:14:01.809248 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:14:01.809259 | orchestrator | changed: [testbed-node-2] 2026-03-16 01:14:01.809270 | orchestrator | changed: [testbed-node-1] 2026-03-16 01:14:01.809281 | orchestrator | 2026-03-16 01:14:01.809292 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-03-16 01:14:01.809303 | orchestrator | 2026-03-16 01:14:01.809314 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-16 01:14:01.809325 | orchestrator | Monday 16 March 2026 01:09:36 +0000 (0:00:11.763) 0:04:18.299 ********** 2026-03-16 01:14:01.809337 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 01:14:01.809348 | orchestrator | 2026-03-16 01:14:01.809366 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-16 01:14:01.809378 | orchestrator | Monday 16 March 2026 01:09:37 +0000 (0:00:01.222) 0:04:19.522 ********** 2026-03-16 01:14:01.809389 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:14:01.809400 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:14:01.809411 | orchestrator | skipping: [testbed-node-5] 2026-03-16 01:14:01.809422 | orchestrator | skipping: [testbed-node-0] 2026-03-16 
01:14:01.809449 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:14:01.809461 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:14:01.809472 | orchestrator | 2026-03-16 01:14:01.809483 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-03-16 01:14:01.809493 | orchestrator | Monday 16 March 2026 01:09:37 +0000 (0:00:00.584) 0:04:20.107 ********** 2026-03-16 01:14:01.809514 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:14:01.809525 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:14:01.809536 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:14:01.809547 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-16 01:14:01.809558 | orchestrator | 2026-03-16 01:14:01.809569 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-16 01:14:01.809580 | orchestrator | Monday 16 March 2026 01:09:39 +0000 (0:00:01.139) 0:04:21.246 ********** 2026-03-16 01:14:01.809591 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-03-16 01:14:01.809602 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-03-16 01:14:01.809613 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-03-16 01:14:01.809624 | orchestrator | 2026-03-16 01:14:01.809635 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-16 01:14:01.809646 | orchestrator | Monday 16 March 2026 01:09:39 +0000 (0:00:00.714) 0:04:21.960 ********** 2026-03-16 01:14:01.809657 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-03-16 01:14:01.809668 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-03-16 01:14:01.809680 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-03-16 01:14:01.809691 | orchestrator | 2026-03-16 01:14:01.809702 | orchestrator | TASK [module-load : Drop module persistence] 
*********************************** 2026-03-16 01:14:01.809713 | orchestrator | Monday 16 March 2026 01:09:41 +0000 (0:00:01.481) 0:04:23.442 ********** 2026-03-16 01:14:01.809724 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-03-16 01:14:01.809735 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:14:01.809746 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-03-16 01:14:01.809757 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:14:01.809767 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-03-16 01:14:01.809778 | orchestrator | skipping: [testbed-node-5] 2026-03-16 01:14:01.809790 | orchestrator | 2026-03-16 01:14:01.809801 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-03-16 01:14:01.809812 | orchestrator | Monday 16 March 2026 01:09:41 +0000 (0:00:00.559) 0:04:24.002 ********** 2026-03-16 01:14:01.809823 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-16 01:14:01.809834 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-16 01:14:01.809845 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:14:01.809855 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-16 01:14:01.809866 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-16 01:14:01.809876 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:14:01.809886 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-16 01:14:01.809897 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-16 01:14:01.809907 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:14:01.809918 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-16 01:14:01.809933 | 
orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-16 01:14:01.809944 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-16 01:14:01.809954 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-16 01:14:01.809965 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-16 01:14:01.809976 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-16 01:14:01.809987 | orchestrator | 2026-03-16 01:14:01.809998 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-03-16 01:14:01.810692 | orchestrator | Monday 16 March 2026 01:09:43 +0000 (0:00:01.267) 0:04:25.269 ********** 2026-03-16 01:14:01.810718 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:14:01.810730 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:14:01.810741 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:14:01.810753 | orchestrator | changed: [testbed-node-3] 2026-03-16 01:14:01.810764 | orchestrator | changed: [testbed-node-4] 2026-03-16 01:14:01.810775 | orchestrator | changed: [testbed-node-5] 2026-03-16 01:14:01.810786 | orchestrator | 2026-03-16 01:14:01.810798 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-03-16 01:14:01.810810 | orchestrator | Monday 16 March 2026 01:09:44 +0000 (0:00:01.373) 0:04:26.643 ********** 2026-03-16 01:14:01.810821 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:14:01.810832 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:14:01.810843 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:14:01.810854 | orchestrator | changed: [testbed-node-3] 2026-03-16 01:14:01.810865 | orchestrator | changed: [testbed-node-4] 2026-03-16 01:14:01.810876 | orchestrator | changed: [testbed-node-5] 2026-03-16 01:14:01.810886 | 
orchestrator | 2026-03-16 01:14:01.810896 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-16 01:14:01.810907 | orchestrator | Monday 16 March 2026 01:09:46 +0000 (0:00:02.015) 0:04:28.659 ********** 2026-03-16 01:14:01.810936 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-16 01:14:01.810949 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-16 01:14:01.810960 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-16 01:14:01.810987 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-16 01:14:01.810999 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-16 01:14:01.811018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-16 01:14:01.811030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-16 01:14:01.811042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-16 01:14:01.811054 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-16 01:14:01.811091 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:01.811114 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': 
True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:01.811132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:01.811145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:01.811157 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:01.811170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:01.811233 | orchestrator | 2026-03-16 01:14:01.811248 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-16 01:14:01.811260 | orchestrator | Monday 16 March 2026 01:09:48 +0000 (0:00:02.158) 0:04:30.817 ********** 2026-03-16 01:14:01.811274 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 01:14:01.811288 | orchestrator | 2026-03-16 01:14:01.811301 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-16 01:14:01.811314 | orchestrator | Monday 16 March 2026 
01:09:49 +0000 (0:00:01.194) 0:04:32.011 ********** 2026-03-16 01:14:01.811333 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-16 01:14:01.811351 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-16 01:14:01.811363 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-16 01:14:01.811376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-16 01:14:01.811394 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-16 01:14:01.811409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-16 01:14:01.811422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-16 01:14:01.811438 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
sshd 8022'], 'timeout': '30'}}}) 2026-03-16 01:14:01.811449 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-16 01:14:01.811461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:01.811473 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:01.811500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:01.811512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:01.811527 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:01.811539 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:01.811550 | orchestrator | 2026-03-16 01:14:01.811561 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-16 01:14:01.811573 | orchestrator | Monday 16 March 2026 01:09:53 +0000 (0:00:03.561) 0:04:35.573 ********** 2026-03-16 01:14:01.811584 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-16 01:14:01.811602 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-16 01:14:01.811617 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-16 01:14:01.811627 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:14:01.811643 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-16 01:14:01.811656 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-16 01:14:01.811668 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-16 01:14:01.811688 | orchestrator | skipping: [testbed-node-5] 2026-03-16 01:14:01.811699 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-16 01:14:01.811713 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-16 01:14:01.811724 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-16 01:14:01.811735 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:14:01.811751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-16 01:14:01.811762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 
'timeout': '30'}}})  2026-03-16 01:14:01.811773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-16 01:14:01.811788 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:14:01.811799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-16 01:14:01.811809 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:14:01.811823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-16 01:14:01.811834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-16 01:14:01.811844 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:14:01.811854 | orchestrator | 2026-03-16 01:14:01.811865 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-16 01:14:01.811875 | orchestrator | Monday 16 March 2026 01:09:54 +0000 (0:00:01.553) 0:04:37.127 ********** 2026-03-16 01:14:01.811893 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 2026-03-16 01:14:01 | INFO  | Wait 1 second(s) until the next check 2026-03-16 01:14:01.811904 | orchestrator | '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-16 01:14:01.811915 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-16 01:14:01.811932 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-16 01:14:01.811942 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:14:01.811956 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-16 01:14:01.811967 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-16 01:14:01.811983 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}})  2026-03-16 01:14:01.811994 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:14:01.812005 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-16 01:14:01.812022 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-16 01:14:01.812031 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 
'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-16 01:14:01.812041 | orchestrator | skipping: [testbed-node-5] 2026-03-16 01:14:01.812056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-16 01:14:01.812065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-16 01:14:01.812075 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:14:01.812091 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-16 01:14:01.812108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-16 01:14:01.812118 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:14:01.812129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-16 01:14:01.812139 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-16 01:14:01.812149 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:14:01.812158 | orchestrator | 2026-03-16 01:14:01.812169 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-16 01:14:01.812179 | orchestrator | Monday 16 March 2026 01:09:57 +0000 (0:00:02.369) 0:04:39.496 ********** 2026-03-16 01:14:01.812209 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:14:01.812220 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:14:01.812230 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:14:01.812240 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-16 01:14:01.812251 | orchestrator | 2026-03-16 01:14:01.812260 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-03-16 01:14:01.812275 | orchestrator | Monday 16 March 2026 01:09:58 +0000 (0:00:01.120) 0:04:40.617 ********** 2026-03-16 01:14:01.812284 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-16 01:14:01.812293 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-16 01:14:01.812302 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-16 01:14:01.812312 | orchestrator | 2026-03-16 01:14:01.812323 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-03-16 
01:14:01.812334 | orchestrator | Monday 16 March 2026 01:09:59 +0000 (0:00:01.031) 0:04:41.648 ********** 2026-03-16 01:14:01.812345 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-16 01:14:01.812355 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-16 01:14:01.812364 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-16 01:14:01.812374 | orchestrator | 2026-03-16 01:14:01.812385 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-03-16 01:14:01.812394 | orchestrator | Monday 16 March 2026 01:10:00 +0000 (0:00:00.931) 0:04:42.579 ********** 2026-03-16 01:14:01.812405 | orchestrator | ok: [testbed-node-3] 2026-03-16 01:14:01.812416 | orchestrator | ok: [testbed-node-4] 2026-03-16 01:14:01.812426 | orchestrator | ok: [testbed-node-5] 2026-03-16 01:14:01.812437 | orchestrator | 2026-03-16 01:14:01.812448 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-03-16 01:14:01.812464 | orchestrator | Monday 16 March 2026 01:10:00 +0000 (0:00:00.503) 0:04:43.082 ********** 2026-03-16 01:14:01.812474 | orchestrator | ok: [testbed-node-3] 2026-03-16 01:14:01.812484 | orchestrator | ok: [testbed-node-4] 2026-03-16 01:14:01.812494 | orchestrator | ok: [testbed-node-5] 2026-03-16 01:14:01.812504 | orchestrator | 2026-03-16 01:14:01.812516 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-03-16 01:14:01.812526 | orchestrator | Monday 16 March 2026 01:10:01 +0000 (0:00:00.730) 0:04:43.812 ********** 2026-03-16 01:14:01.812536 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-16 01:14:01.812546 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-16 01:14:01.812557 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-16 01:14:01.812567 | orchestrator | 2026-03-16 01:14:01.812584 | orchestrator | TASK [nova-cell : Copy over ceph cinder 
keyring file] ************************** 2026-03-16 01:14:01.812595 | orchestrator | Monday 16 March 2026 01:10:02 +0000 (0:00:01.225) 0:04:45.038 ********** 2026-03-16 01:14:01.812606 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-16 01:14:01.812617 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-16 01:14:01.812627 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-16 01:14:01.812637 | orchestrator | 2026-03-16 01:14:01.812647 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-03-16 01:14:01.812657 | orchestrator | Monday 16 March 2026 01:10:04 +0000 (0:00:01.240) 0:04:46.278 ********** 2026-03-16 01:14:01.812666 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-16 01:14:01.812676 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-16 01:14:01.812687 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-16 01:14:01.812698 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-03-16 01:14:01.812707 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-03-16 01:14:01.812717 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-03-16 01:14:01.812727 | orchestrator | 2026-03-16 01:14:01.812737 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-03-16 01:14:01.812747 | orchestrator | Monday 16 March 2026 01:10:08 +0000 (0:00:04.028) 0:04:50.307 ********** 2026-03-16 01:14:01.812755 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:14:01.812765 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:14:01.812775 | orchestrator | skipping: [testbed-node-5] 2026-03-16 01:14:01.812786 | orchestrator | 2026-03-16 01:14:01.812795 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-03-16 01:14:01.812806 | orchestrator | Monday 
16 March 2026 01:10:08 +0000 (0:00:00.540) 0:04:50.847 ********** 2026-03-16 01:14:01.812815 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:14:01.812825 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:14:01.812835 | orchestrator | skipping: [testbed-node-5] 2026-03-16 01:14:01.812845 | orchestrator | 2026-03-16 01:14:01.812855 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-03-16 01:14:01.812866 | orchestrator | Monday 16 March 2026 01:10:09 +0000 (0:00:00.358) 0:04:51.206 ********** 2026-03-16 01:14:01.812877 | orchestrator | changed: [testbed-node-3] 2026-03-16 01:14:01.812887 | orchestrator | changed: [testbed-node-4] 2026-03-16 01:14:01.812897 | orchestrator | changed: [testbed-node-5] 2026-03-16 01:14:01.812907 | orchestrator | 2026-03-16 01:14:01.812915 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-03-16 01:14:01.812925 | orchestrator | Monday 16 March 2026 01:10:10 +0000 (0:00:01.375) 0:04:52.582 ********** 2026-03-16 01:14:01.812935 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-16 01:14:01.812947 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-16 01:14:01.812965 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-16 01:14:01.812975 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-16 01:14:01.812985 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-16 01:14:01.812994 | orchestrator | changed: 
[testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-16 01:14:01.813005 | orchestrator | 2026-03-16 01:14:01.813025 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-03-16 01:14:01.813035 | orchestrator | Monday 16 March 2026 01:10:13 +0000 (0:00:03.398) 0:04:55.980 ********** 2026-03-16 01:14:01.813046 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-16 01:14:01.813057 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-16 01:14:01.813066 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-16 01:14:01.813076 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-16 01:14:01.813086 | orchestrator | changed: [testbed-node-3] 2026-03-16 01:14:01.813097 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-16 01:14:01.813107 | orchestrator | changed: [testbed-node-4] 2026-03-16 01:14:01.813118 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-16 01:14:01.813129 | orchestrator | changed: [testbed-node-5] 2026-03-16 01:14:01.813140 | orchestrator | 2026-03-16 01:14:01.813150 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-03-16 01:14:01.813161 | orchestrator | Monday 16 March 2026 01:10:17 +0000 (0:00:03.811) 0:04:59.792 ********** 2026-03-16 01:14:01.813171 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:14:01.813181 | orchestrator | 2026-03-16 01:14:01.813205 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-03-16 01:14:01.813215 | orchestrator | Monday 16 March 2026 01:10:17 +0000 (0:00:00.140) 0:04:59.933 ********** 2026-03-16 01:14:01.813225 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:14:01.813235 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:14:01.813244 | orchestrator | skipping: [testbed-node-5] 2026-03-16 
01:14:01.813255 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:14:01.813265 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:14:01.813275 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:14:01.813285 | orchestrator | 2026-03-16 01:14:01.813295 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-03-16 01:14:01.813305 | orchestrator | Monday 16 March 2026 01:10:18 +0000 (0:00:00.584) 0:05:00.517 ********** 2026-03-16 01:14:01.813315 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-16 01:14:01.813321 | orchestrator | 2026-03-16 01:14:01.813337 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-03-16 01:14:01.813344 | orchestrator | Monday 16 March 2026 01:10:19 +0000 (0:00:00.714) 0:05:01.232 ********** 2026-03-16 01:14:01.813350 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:14:01.813355 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:14:01.813361 | orchestrator | skipping: [testbed-node-5] 2026-03-16 01:14:01.813367 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:14:01.813373 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:14:01.813378 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:14:01.813384 | orchestrator | 2026-03-16 01:14:01.813390 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-03-16 01:14:01.813396 | orchestrator | Monday 16 March 2026 01:10:19 +0000 (0:00:00.874) 0:05:02.106 ********** 2026-03-16 01:14:01.813403 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-16 01:14:01.813417 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-16 01:14:01.813427 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-16 01:14:01.813434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-16 01:14:01.813446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-16 01:14:01.813452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-16 01:14:01.813466 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-16 01:14:01.813480 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-16 01:14:01.813493 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-16 01:14:01.813508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:01.813518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:01.813535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:01.813552 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:01.813563 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:01.813575 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-16 01:14:01.813581 | orchestrator |
2026-03-16 01:14:01.813587 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2026-03-16 01:14:01.813593 | orchestrator | Monday 16 March 2026 01:10:23 +0000 (0:00:04.026) 0:05:06.133 **********
2026-03-16 01:14:01.813599 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-16 01:14:01.813611 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-16 01:14:01.813622 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-16 01:14:01.813715 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-16 01:14:01.813730 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-16 01:14:01.813745 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-16 01:14:01.813756 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-16 01:14:01.813766 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-16 01:14:01.813806 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-16 01:14:01.813814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-16 01:14:01.813824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-16 01:14:01.813831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-16 01:14:01.813837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-16 01:14:01.813847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-16 01:14:01.813854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-16 01:14:01.813860 | orchestrator |
2026-03-16 01:14:01.813866 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2026-03-16 01:14:01.813872 | orchestrator | Monday 16 March 2026 01:10:30 +0000 (0:00:06.820) 0:05:12.953 **********
2026-03-16 01:14:01.813878 | orchestrator | skipping: [testbed-node-3]
2026-03-16 01:14:01.813889 | orchestrator | skipping: [testbed-node-5]
2026-03-16 01:14:01.813899 | orchestrator | skipping: [testbed-node-4]
2026-03-16 01:14:01.813910 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:14:01.813924 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:14:01.813933 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:14:01.813942 | orchestrator |
2026-03-16 01:14:01.813953 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2026-03-16 01:14:01.813962 | orchestrator | Monday 16 March 2026 01:10:32 +0000 (0:00:01.299) 0:05:14.253 **********
2026-03-16 01:14:01.813971 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-16 01:14:01.813980 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-16 01:14:01.813990 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-16 01:14:01.814000 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-16 01:14:01.814009 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-16 01:14:01.814050 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-16 01:14:01.814060 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-16 01:14:01.814069 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:14:01.814078 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-16 01:14:01.814086 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:14:01.814095 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-16 01:14:01.814104 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:14:01.814117 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-16 01:14:01.814127 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-16 01:14:01.814136 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-16 01:14:01.814146 | orchestrator |
2026-03-16 01:14:01.814157 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2026-03-16 01:14:01.814166 | orchestrator | Monday 16 March 2026 01:10:35 +0000 (0:00:03.768) 0:05:18.021 **********
2026-03-16 01:14:01.814180 | orchestrator | skipping: [testbed-node-3]
2026-03-16 01:14:01.814238 | orchestrator | skipping: [testbed-node-4]
2026-03-16 01:14:01.814250 | orchestrator | skipping: [testbed-node-5]
2026-03-16 01:14:01.814260 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:14:01.814270 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:14:01.814280 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:14:01.814290 | orchestrator |
2026-03-16 01:14:01.814299 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2026-03-16 01:14:01.814308 | orchestrator | Monday 16 March 2026 01:10:36 +0000 (0:00:00.645) 0:05:18.667 **********
2026-03-16 01:14:01.814319 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-16 01:14:01.814329 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-16 01:14:01.814336 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-16 01:14:01.814342 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-16 01:14:01.814390 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-16 01:14:01.814397 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-16 01:14:01.814403 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-16 01:14:01.814409 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-16 01:14:01.814414 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-16 01:14:01.814420 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-16 01:14:01.814426 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:14:01.814431 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-16 01:14:01.814437 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:14:01.814443 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-16 01:14:01.814448 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:14:01.814456 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-16 01:14:01.814466 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-16 01:14:01.814480 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-16 01:14:01.814491 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-16 01:14:01.814509 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-16 01:14:01.814520 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-16 01:14:01.814529 | orchestrator |
2026-03-16 01:14:01.814540 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2026-03-16 01:14:01.814548 | orchestrator | Monday 16 March 2026 01:10:41 +0000 (0:00:05.108) 0:05:23.775 **********
2026-03-16 01:14:01.814554 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-16 01:14:01.814559 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-16 01:14:01.814571 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-16 01:14:01.814577 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-16 01:14:01.814583 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-16 01:14:01.814589 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-16 01:14:01.814594 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-16 01:14:01.814600 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-16 01:14:01.814606 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-16 01:14:01.814619 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-16 01:14:01.814625 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-16 01:14:01.814630 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-16 01:14:01.814636 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-16 01:14:01.814642 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-16 01:14:01.814648 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:14:01.814654 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-16 01:14:01.814659 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-16 01:14:01.814665 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:14:01.814671 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-16 01:14:01.814676 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:14:01.814682 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-16 01:14:01.814687 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-16 01:14:01.814692 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-16 01:14:01.814697 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-16 01:14:01.814702 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-16 01:14:01.814707 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-16 01:14:01.814712 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-16 01:14:01.814717 | orchestrator |
2026-03-16 01:14:01.814723 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2026-03-16 01:14:01.814728 | orchestrator | Monday 16 March 2026 01:10:48 +0000 (0:00:06.781) 0:05:30.557 **********
2026-03-16 01:14:01.814733 | orchestrator | skipping: [testbed-node-3]
2026-03-16 01:14:01.814738 | orchestrator | skipping: [testbed-node-4]
2026-03-16 01:14:01.814743 | orchestrator | skipping: [testbed-node-5]
2026-03-16 01:14:01.814748 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:14:01.814753 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:14:01.814762 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:14:01.814774 | orchestrator |
2026-03-16 01:14:01.814784 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2026-03-16 01:14:01.814792 | orchestrator | Monday 16 March 2026 01:10:49 +0000 (0:00:00.783) 0:05:31.340 **********
2026-03-16 01:14:01.814801 | orchestrator | skipping: [testbed-node-3]
2026-03-16 01:14:01.814809 | orchestrator | skipping: [testbed-node-4]
2026-03-16 01:14:01.814818 | orchestrator | skipping: [testbed-node-5]
2026-03-16 01:14:01.814827 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:14:01.814836 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:14:01.814850 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:14:01.814857 | orchestrator |
2026-03-16 01:14:01.814862 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2026-03-16 01:14:01.814867 | orchestrator | Monday 16 March 2026 01:10:49 +0000 (0:00:00.602) 0:05:31.943 **********
2026-03-16 01:14:01.814872 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:14:01.814877 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:14:01.814882 | orchestrator | changed: [testbed-node-3]
2026-03-16 01:14:01.814887 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:14:01.814892 | orchestrator | changed: [testbed-node-4]
2026-03-16 01:14:01.814897 | orchestrator | changed: [testbed-node-5]
2026-03-16 01:14:01.814902 | orchestrator |
2026-03-16 01:14:01.814907 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2026-03-16 01:14:01.814912 | orchestrator | Monday 16 March 2026 01:10:51 +0000 (0:00:01.918) 0:05:33.861 **********
2026-03-16 01:14:01.814924 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-16 01:14:01.814933 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-16 01:14:01.814939 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-16 01:14:01.814945 | orchestrator | skipping: [testbed-node-4]
2026-03-16 01:14:01.814950 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-16 01:14:01.814960 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-16 01:14:01.814969 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-16 01:14:01.814975 | orchestrator | skipping: [testbed-node-3]
2026-03-16 01:14:01.814980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-16 01:14:01.814988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-16 01:14:01.814994 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-16 01:14:01.814999 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:14:01.815004 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-16 01:14:01.815014 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-16 01:14:01.815019 | orchestrator | skipping: [testbed-node-5]
2026-03-16 01:14:01.815028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-16 01:14:01.815033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-16 01:14:01.815041 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:14:01.815054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-16 01:14:01.815063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-16 01:14:01.815071 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:14:01.815085 | orchestrator |
2026-03-16 01:14:01.815094 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2026-03-16 01:14:01.815103 | orchestrator | Monday 16 March 2026 01:10:53 +0000 (0:00:01.344) 0:05:35.205 **********
2026-03-16 01:14:01.815112 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-03-16 01:14:01.815121 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-03-16 01:14:01.815130 | orchestrator | skipping: [testbed-node-3]
2026-03-16 01:14:01.815137 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-03-16 01:14:01.815145 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-03-16 01:14:01.815154 | orchestrator | skipping: [testbed-node-4]
2026-03-16 01:14:01.815163 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-03-16 01:14:01.815172 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-03-16 01:14:01.815181 | orchestrator | skipping: [testbed-node-5]
2026-03-16 01:14:01.815205 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-03-16 01:14:01.815215 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-03-16 01:14:01.815222 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:14:01.815227 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-03-16 01:14:01.815232 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-03-16 01:14:01.815238 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:14:01.815242 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-03-16 01:14:01.815248 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-03-16 01:14:01.815253 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:14:01.815258 | orchestrator |
2026-03-16 01:14:01.815263 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2026-03-16 01:14:01.815268 | orchestrator | Monday 16 March 2026 01:10:53 +0000 (0:00:00.825) 0:05:36.031 **********
2026-03-16 01:14:01.815279 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-16 01:14:01.815288 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-16 01:14:01.815294 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-16 01:14:01.815304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-16 01:14:01.815314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image':
'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-16 01:14:01.815327 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-16 01:14:01.815336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-16 01:14:01.815348 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-16 01:14:01.815358 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-16 01:14:01.815372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:01.815381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:01.815391 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:01.815405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:01.815413 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 
'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:01.815421 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:01.815430 | orchestrator | 2026-03-16 01:14:01.815435 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-16 01:14:01.815440 | orchestrator | Monday 16 March 2026 01:10:56 +0000 (0:00:02.565) 0:05:38.597 ********** 2026-03-16 01:14:01.815446 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:14:01.815451 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:14:01.815458 | orchestrator | skipping: [testbed-node-5] 2026-03-16 01:14:01.815471 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:14:01.815481 | orchestrator | skipping: 
[testbed-node-1] 2026-03-16 01:14:01.815489 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:14:01.815498 | orchestrator | 2026-03-16 01:14:01.815507 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-16 01:14:01.815516 | orchestrator | Monday 16 March 2026 01:10:57 +0000 (0:00:00.735) 0:05:39.333 ********** 2026-03-16 01:14:01.815523 | orchestrator | 2026-03-16 01:14:01.815529 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-16 01:14:01.815534 | orchestrator | Monday 16 March 2026 01:10:57 +0000 (0:00:00.135) 0:05:39.469 ********** 2026-03-16 01:14:01.815542 | orchestrator | 2026-03-16 01:14:01.815550 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-16 01:14:01.815558 | orchestrator | Monday 16 March 2026 01:10:57 +0000 (0:00:00.134) 0:05:39.604 ********** 2026-03-16 01:14:01.815566 | orchestrator | 2026-03-16 01:14:01.815574 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-16 01:14:01.815582 | orchestrator | Monday 16 March 2026 01:10:57 +0000 (0:00:00.131) 0:05:39.735 ********** 2026-03-16 01:14:01.815590 | orchestrator | 2026-03-16 01:14:01.815598 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-16 01:14:01.815607 | orchestrator | Monday 16 March 2026 01:10:57 +0000 (0:00:00.149) 0:05:39.885 ********** 2026-03-16 01:14:01.815615 | orchestrator | 2026-03-16 01:14:01.815623 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-16 01:14:01.815632 | orchestrator | Monday 16 March 2026 01:10:57 +0000 (0:00:00.151) 0:05:40.036 ********** 2026-03-16 01:14:01.815641 | orchestrator | 2026-03-16 01:14:01.815650 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-03-16 01:14:01.815659 
| orchestrator | Monday 16 March 2026 01:10:58 +0000 (0:00:00.306) 0:05:40.342 ********** 2026-03-16 01:14:01.815667 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:14:01.815676 | orchestrator | changed: [testbed-node-2] 2026-03-16 01:14:01.815685 | orchestrator | changed: [testbed-node-1] 2026-03-16 01:14:01.815693 | orchestrator | 2026-03-16 01:14:01.815698 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-03-16 01:14:01.815703 | orchestrator | Monday 16 March 2026 01:11:04 +0000 (0:00:06.467) 0:05:46.810 ********** 2026-03-16 01:14:01.815708 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:14:01.815713 | orchestrator | changed: [testbed-node-1] 2026-03-16 01:14:01.815718 | orchestrator | changed: [testbed-node-2] 2026-03-16 01:14:01.815723 | orchestrator | 2026-03-16 01:14:01.815728 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-03-16 01:14:01.815733 | orchestrator | Monday 16 March 2026 01:11:20 +0000 (0:00:15.963) 0:06:02.774 ********** 2026-03-16 01:14:01.815739 | orchestrator | changed: [testbed-node-5] 2026-03-16 01:14:01.815749 | orchestrator | changed: [testbed-node-4] 2026-03-16 01:14:01.815759 | orchestrator | changed: [testbed-node-3] 2026-03-16 01:14:01.815764 | orchestrator | 2026-03-16 01:14:01.815770 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-03-16 01:14:01.815775 | orchestrator | Monday 16 March 2026 01:11:40 +0000 (0:00:19.875) 0:06:22.650 ********** 2026-03-16 01:14:01.815780 | orchestrator | changed: [testbed-node-3] 2026-03-16 01:14:01.815785 | orchestrator | changed: [testbed-node-5] 2026-03-16 01:14:01.815790 | orchestrator | changed: [testbed-node-4] 2026-03-16 01:14:01.815795 | orchestrator | 2026-03-16 01:14:01.815800 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-03-16 01:14:01.815805 | orchestrator 
| Monday 16 March 2026 01:12:13 +0000 (0:00:32.759) 0:06:55.410 ********** 2026-03-16 01:14:01.815810 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left). 2026-03-16 01:14:01.815816 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2026-03-16 01:14:01.815825 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 2026-03-16 01:14:01.815834 | orchestrator | changed: [testbed-node-3] 2026-03-16 01:14:01.815843 | orchestrator | changed: [testbed-node-4] 2026-03-16 01:14:01.815851 | orchestrator | changed: [testbed-node-5] 2026-03-16 01:14:01.815859 | orchestrator | 2026-03-16 01:14:01.815868 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-03-16 01:14:01.815876 | orchestrator | Monday 16 March 2026 01:12:19 +0000 (0:00:06.368) 0:07:01.778 ********** 2026-03-16 01:14:01.815884 | orchestrator | changed: [testbed-node-3] 2026-03-16 01:14:01.815893 | orchestrator | changed: [testbed-node-4] 2026-03-16 01:14:01.815901 | orchestrator | changed: [testbed-node-5] 2026-03-16 01:14:01.815909 | orchestrator | 2026-03-16 01:14:01.815926 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-03-16 01:14:01.815935 | orchestrator | Monday 16 March 2026 01:12:20 +0000 (0:00:00.894) 0:07:02.673 ********** 2026-03-16 01:14:01.815943 | orchestrator | changed: [testbed-node-4] 2026-03-16 01:14:01.815951 | orchestrator | changed: [testbed-node-5] 2026-03-16 01:14:01.815960 | orchestrator | changed: [testbed-node-3] 2026-03-16 01:14:01.815968 | orchestrator | 2026-03-16 01:14:01.815977 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-03-16 01:14:01.815986 | orchestrator | Monday 16 March 2026 01:12:48 +0000 (0:00:27.541) 0:07:30.214 ********** 2026-03-16 
01:14:01.815994 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:14:01.816003 | orchestrator | 2026-03-16 01:14:01.816011 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-03-16 01:14:01.816020 | orchestrator | Monday 16 March 2026 01:12:48 +0000 (0:00:00.125) 0:07:30.340 ********** 2026-03-16 01:14:01.816029 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:14:01.816037 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:14:01.816045 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:14:01.816053 | orchestrator | skipping: [testbed-node-5] 2026-03-16 01:14:01.816060 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:14:01.816068 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2026-03-16 01:14:01.816077 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-16 01:14:01.816084 | orchestrator | 2026-03-16 01:14:01.816091 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-03-16 01:14:01.816099 | orchestrator | Monday 16 March 2026 01:13:08 +0000 (0:00:20.491) 0:07:50.831 ********** 2026-03-16 01:14:01.816107 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:14:01.816116 | orchestrator | skipping: [testbed-node-5] 2026-03-16 01:14:01.816123 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:14:01.816131 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:14:01.816139 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:14:01.816148 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:14:01.816157 | orchestrator | 2026-03-16 01:14:01.816173 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-03-16 01:14:01.816181 | orchestrator | Monday 16 March 2026 01:13:17 +0000 (0:00:08.478) 0:07:59.310 ********** 2026-03-16 01:14:01.816206 
| orchestrator | skipping: [testbed-node-5] 2026-03-16 01:14:01.816215 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:14:01.816223 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:14:01.816231 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:14:01.816239 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:14:01.816247 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2026-03-16 01:14:01.816256 | orchestrator | 2026-03-16 01:14:01.816264 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-16 01:14:01.816274 | orchestrator | Monday 16 March 2026 01:13:20 +0000 (0:00:03.464) 0:08:02.774 ********** 2026-03-16 01:14:01.816283 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-16 01:14:01.816292 | orchestrator | 2026-03-16 01:14:01.816301 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-16 01:14:01.816309 | orchestrator | Monday 16 March 2026 01:13:36 +0000 (0:00:15.516) 0:08:18.290 ********** 2026-03-16 01:14:01.816316 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-16 01:14:01.816323 | orchestrator | 2026-03-16 01:14:01.816330 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-03-16 01:14:01.816337 | orchestrator | Monday 16 March 2026 01:13:37 +0000 (0:00:01.254) 0:08:19.545 ********** 2026-03-16 01:14:01.816344 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:14:01.816352 | orchestrator | 2026-03-16 01:14:01.816360 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-03-16 01:14:01.816368 | orchestrator | Monday 16 March 2026 01:13:38 +0000 (0:00:01.411) 0:08:20.957 ********** 2026-03-16 01:14:01.816376 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-16 01:14:01.816383 | 
orchestrator | 2026-03-16 01:14:01.816392 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-03-16 01:14:01.816408 | orchestrator | Monday 16 March 2026 01:13:52 +0000 (0:00:13.654) 0:08:34.611 ********** 2026-03-16 01:14:01.816418 | orchestrator | ok: [testbed-node-3] 2026-03-16 01:14:01.816426 | orchestrator | ok: [testbed-node-4] 2026-03-16 01:14:01.816434 | orchestrator | ok: [testbed-node-5] 2026-03-16 01:14:01.816442 | orchestrator | ok: [testbed-node-0] 2026-03-16 01:14:01.816451 | orchestrator | ok: [testbed-node-1] 2026-03-16 01:14:01.816459 | orchestrator | ok: [testbed-node-2] 2026-03-16 01:14:01.816468 | orchestrator | 2026-03-16 01:14:01.816476 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-03-16 01:14:01.816483 | orchestrator | 2026-03-16 01:14:01.816491 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-03-16 01:14:01.816498 | orchestrator | Monday 16 March 2026 01:13:54 +0000 (0:00:01.919) 0:08:36.531 ********** 2026-03-16 01:14:01.816506 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:14:01.816513 | orchestrator | changed: [testbed-node-1] 2026-03-16 01:14:01.816521 | orchestrator | changed: [testbed-node-2] 2026-03-16 01:14:01.816528 | orchestrator | 2026-03-16 01:14:01.816536 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-03-16 01:14:01.816544 | orchestrator | 2026-03-16 01:14:01.816552 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-03-16 01:14:01.816559 | orchestrator | Monday 16 March 2026 01:13:55 +0000 (0:00:01.193) 0:08:37.724 ********** 2026-03-16 01:14:01.816567 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:14:01.816575 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:14:01.816583 | orchestrator | skipping: [testbed-node-2] 2026-03-16 
01:14:01.816591 | orchestrator | 2026-03-16 01:14:01.816599 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-03-16 01:14:01.816607 | orchestrator | 2026-03-16 01:14:01.816614 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-03-16 01:14:01.816630 | orchestrator | Monday 16 March 2026 01:13:56 +0000 (0:00:00.615) 0:08:38.339 ********** 2026-03-16 01:14:01.816639 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-03-16 01:14:01.816652 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-16 01:14:01.816660 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-16 01:14:01.816668 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-03-16 01:14:01.816676 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-03-16 01:14:01.816684 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-03-16 01:14:01.816692 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:14:01.816700 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-03-16 01:14:01.816708 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-16 01:14:01.816716 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-16 01:14:01.816723 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-03-16 01:14:01.816731 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-03-16 01:14:01.816739 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-03-16 01:14:01.816747 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:14:01.816756 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-03-16 01:14:01.816764 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-16 01:14:01.816772 | 
orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-16 01:14:01.816781 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-03-16 01:14:01.816789 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-03-16 01:14:01.816797 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-03-16 01:14:01.816805 | orchestrator | skipping: [testbed-node-5] 2026-03-16 01:14:01.816815 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-03-16 01:14:01.816823 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-16 01:14:01.816831 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-16 01:14:01.816840 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-03-16 01:14:01.816848 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-03-16 01:14:01.816855 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-03-16 01:14:01.816864 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:14:01.816872 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-03-16 01:14:01.816880 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-16 01:14:01.816888 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-16 01:14:01.816896 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-03-16 01:14:01.816905 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-03-16 01:14:01.816914 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-03-16 01:14:01.816923 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:14:01.816932 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-03-16 01:14:01.816941 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-16 01:14:01.816950 | 
orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-16 01:14:01.816959 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-03-16 01:14:01.816968 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-03-16 01:14:01.816976 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-03-16 01:14:01.816985 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:14:01.816995 | orchestrator | 2026-03-16 01:14:01.817004 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-03-16 01:14:01.817022 | orchestrator | 2026-03-16 01:14:01.817031 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-03-16 01:14:01.817040 | orchestrator | Monday 16 March 2026 01:13:57 +0000 (0:00:01.405) 0:08:39.745 ********** 2026-03-16 01:14:01.817049 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-03-16 01:14:01.817067 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-03-16 01:14:01.817078 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:14:01.817087 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-03-16 01:14:01.817095 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-03-16 01:14:01.817104 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:14:01.817112 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-03-16 01:14:01.817120 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-03-16 01:14:01.817129 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:14:01.817136 | orchestrator | 2026-03-16 01:14:01.817146 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-03-16 01:14:01.817155 | orchestrator | 2026-03-16 01:14:01.817164 | orchestrator | TASK [nova : Run Nova API online database migrations] 
**************************
2026-03-16 01:14:01.817173 | orchestrator | Monday 16 March 2026 01:13:58 +0000 (0:00:00.752) 0:08:40.498 **********
2026-03-16 01:14:01.817181 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:14:01.817236 | orchestrator |
2026-03-16 01:14:01.817246 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-03-16 01:14:01.817254 | orchestrator |
2026-03-16 01:14:01.817263 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-03-16 01:14:01.817271 | orchestrator | Monday 16 March 2026 01:13:59 +0000 (0:00:00.689) 0:08:41.188 **********
2026-03-16 01:14:01.817280 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:14:01.817289 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:14:01.817297 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:14:01.817305 | orchestrator |
2026-03-16 01:14:01.817313 | orchestrator | PLAY RECAP *********************************************************************
2026-03-16 01:14:01.817328 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-16 01:14:01.817338 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2026-03-16 01:14:01.817347 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-03-16 01:14:01.817355 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-03-16 01:14:01.817364 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-03-16 01:14:01.817372 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-03-16 01:14:01.817381 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-03-16 01:14:01.817389 | orchestrator |
2026-03-16 01:14:01.817398 | orchestrator |
2026-03-16 01:14:01.817407 | orchestrator | TASKS RECAP ********************************************************************
2026-03-16 01:14:01.817416 | orchestrator | Monday 16 March 2026 01:13:59 +0000 (0:00:00.423) 0:08:41.612 **********
2026-03-16 01:14:01.817424 | orchestrator | ===============================================================================
2026-03-16 01:14:01.817430 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 32.76s
2026-03-16 01:14:01.817436 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 32.50s
2026-03-16 01:14:01.817447 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 27.54s
2026-03-16 01:14:01.817452 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 21.27s
2026-03-16 01:14:01.817460 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 20.51s
2026-03-16 01:14:01.817469 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 20.49s
2026-03-16 01:14:01.817477 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 19.88s
2026-03-16 01:14:01.817485 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 16.98s
2026-03-16 01:14:01.817492 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 15.96s
2026-03-16 01:14:01.817500 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 15.93s
2026-03-16 01:14:01.817508 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 15.52s
2026-03-16 01:14:01.817517 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 15.00s
2026-03-16 01:14:01.817525 | orchestrator | nova-cell : Create cell ------------------------------------------------ 14.91s
2026-03-16 01:14:01.817533 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.85s
2026-03-16 01:14:01.817541 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 13.65s
2026-03-16 01:14:01.817549 | orchestrator | nova : Restart nova-api container -------------------------------------- 11.76s
2026-03-16 01:14:01.817557 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.48s
2026-03-16 01:14:01.817566 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.78s
2026-03-16 01:14:01.817574 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 7.65s
2026-03-16 01:14:01.817582 | orchestrator | nova-cell : Copying over nova.conf -------------------------------------- 6.82s
2026-03-16 01:14:04.855235 | orchestrator | 2026-03-16 01:14:04 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED
2026-03-16 01:14:04.855854 | orchestrator | 2026-03-16 01:14:04 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:14:07.900304 | orchestrator | 2026-03-16 01:14:07 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED
2026-03-16 01:14:07.900673 | orchestrator | 2026-03-16 01:14:07 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:14:10.950311 | orchestrator | 2026-03-16 01:14:10 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED
2026-03-16 01:14:10.950390 | orchestrator | 2026-03-16 01:14:10 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:14:14.000488 | orchestrator | 2026-03-16 01:14:14 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state STARTED
2026-03-16 01:14:14.000545 | orchestrator | 2026-03-16 01:14:14 | INFO  | Wait 1 second(s) until the next check
2026-03-16 01:14:17.046783 | orchestrator | 2026-03-16 01:14:17 | INFO  | Task fa33a1d4-62c0-4004-98f1-aa42ff74a5c1 is in state SUCCESS
2026-03-16 01:14:17.047797 | orchestrator |
2026-03-16 01:14:17.047842 | orchestrator |
2026-03-16 01:14:17.047851 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-16 01:14:17.047858 | orchestrator |
2026-03-16 01:14:17.047865 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-16 01:14:17.047872 | orchestrator | Monday 16 March 2026 01:09:38 +0000 (0:00:00.273) 0:00:00.273 **********
2026-03-16 01:14:17.047889 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:14:17.047896 | orchestrator | ok: [testbed-node-1]
2026-03-16 01:14:17.047902 | orchestrator | ok: [testbed-node-2]
2026-03-16 01:14:17.047948 | orchestrator |
2026-03-16 01:14:17.047959 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-16 01:14:17.048002 | orchestrator | Monday 16 March 2026 01:09:38 +0000 (0:00:00.292) 0:00:00.566 **********
2026-03-16 01:14:17.048009 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-03-16 01:14:17.048030 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-03-16 01:14:17.048036 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-03-16 01:14:17.048042 | orchestrator |
2026-03-16 01:14:17.048049 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-03-16 01:14:17.048055 | orchestrator |
2026-03-16 01:14:17.048061 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-16 01:14:17.048068 | orchestrator | Monday 16 March 2026 01:09:39 +0000 (0:00:00.453) 0:00:01.019 **********
2026-03-16 01:14:17.048074 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-16 01:14:17.048082 | orchestrator |
2026-03-16 01:14:17.048088 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2026-03-16 01:14:17.048094 | orchestrator | Monday 16 March 2026 01:09:40 +0000 (0:00:00.558) 0:00:01.578 **********
2026-03-16 01:14:17.048101 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2026-03-16 01:14:17.048107 | orchestrator |
2026-03-16 01:14:17.048113 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2026-03-16 01:14:17.048119 | orchestrator | Monday 16 March 2026 01:09:43 +0000 (0:00:03.895) 0:00:05.474 **********
2026-03-16 01:14:17.048125 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2026-03-16 01:14:17.048132 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2026-03-16 01:14:17.048138 | orchestrator |
2026-03-16 01:14:17.048144 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2026-03-16 01:14:17.048150 | orchestrator | Monday 16 March 2026 01:09:51 +0000 (0:00:07.241) 0:00:12.715 **********
2026-03-16 01:14:17.048203 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-16 01:14:17.048211 | orchestrator |
2026-03-16 01:14:17.048217 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2026-03-16 01:14:17.048223 | orchestrator | Monday 16 March 2026 01:09:54 +0000 (0:00:03.491) 0:00:16.207 **********
2026-03-16 01:14:17.048229 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-16 01:14:17.048235 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-03-16 01:14:17.048242 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-03-16 01:14:17.048248 | orchestrator |
2026-03-16 01:14:17.048254 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2026-03-16 01:14:17.048260 | orchestrator | Monday 16 March 2026 01:10:03 +0000 (0:00:08.472) 0:00:24.680 **********
2026-03-16 01:14:17.048266 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-16 01:14:17.048273 | orchestrator |
2026-03-16 01:14:17.048280 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2026-03-16 01:14:17.048286 | orchestrator | Monday 16 March 2026 01:10:07 +0000 (0:00:03.903) 0:00:28.584 **********
2026-03-16 01:14:17.048292 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2026-03-16 01:14:17.048298 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2026-03-16 01:14:17.048304 | orchestrator |
2026-03-16 01:14:17.048311 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2026-03-16 01:14:17.048317 | orchestrator | Monday 16 March 2026 01:10:15 +0000 (0:00:08.390) 0:00:36.974 **********
2026-03-16 01:14:17.048323 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2026-03-16 01:14:17.048329 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2026-03-16 01:14:17.048335 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2026-03-16 01:14:17.048341 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2026-03-16 01:14:17.048347 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2026-03-16 01:14:17.048353 | orchestrator |
2026-03-16 01:14:17.048359 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-16 01:14:17.048451 | orchestrator | Monday 16 March 2026 01:10:33 +0000 (0:00:17.992) 0:00:54.966 **********
2026-03-16 01:14:17.048456 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-16 01:14:17.048460 | orchestrator |
2026-03-16 01:14:17.048465 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2026-03-16 01:14:17.048469 | orchestrator | Monday 16 March 2026 01:10:34 +0000 (0:00:00.903) 0:00:55.870 **********
2026-03-16 01:14:17.048474 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:14:17.048478 | orchestrator |
2026-03-16 01:14:17.048483 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2026-03-16 01:14:17.048487 | orchestrator | Monday 16 March 2026 01:10:39 +0000 (0:00:05.010) 0:01:00.881 **********
2026-03-16 01:14:17.048491 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:14:17.048496 | orchestrator |
2026-03-16 01:14:17.048500 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-03-16 01:14:17.048514 | orchestrator | Monday 16 March 2026 01:10:43 +0000 (0:00:03.940) 0:01:04.822 **********
2026-03-16 01:14:17.048519 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:14:17.048523 | orchestrator |
2026-03-16 01:14:17.048528 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2026-03-16 01:14:17.048532 | orchestrator | Monday 16 March 2026 01:10:46 +0000 (0:00:03.118) 0:01:07.941 **********
2026-03-16 01:14:17.048558 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-03-16 01:14:17.048567 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-03-16 01:14:17.048572 | orchestrator |
2026-03-16 01:14:17.048576 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2026-03-16 01:14:17.048580 | orchestrator | Monday 16 March 2026 01:10:55 +0000 (0:00:09.295) 0:01:17.236 **********
2026-03-16 01:14:17.048585 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2026-03-16 01:14:17.048589 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2026-03-16 01:14:17.048595 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2026-03-16 01:14:17.048600 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2026-03-16 01:14:17.048604 | orchestrator |
2026-03-16 01:14:17.048609 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2026-03-16 01:14:17.048613 | orchestrator | Monday 16 March 2026 01:11:09 +0000 (0:00:14.334) 0:01:31.570 **********
2026-03-16 01:14:17.048618 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:14:17.048622 | orchestrator |
2026-03-16 01:14:17.048627 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2026-03-16 01:14:17.048631 | orchestrator | Monday 16 March 2026 01:11:13 +0000 (0:00:03.724) 0:01:35.294 **********
2026-03-16 01:14:17.048635 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:14:17.048640 | orchestrator |
2026-03-16 01:14:17.048644 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2026-03-16 01:14:17.048648 | orchestrator | Monday 16 March 2026 01:11:18 +0000 (0:00:04.871) 0:01:40.166 **********
2026-03-16 01:14:17.048652 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:14:17.048656 | orchestrator |
2026-03-16 01:14:17.048660 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2026-03-16 01:14:17.048664 | orchestrator | Monday 16 March 2026 01:11:18 +0000 (0:00:00.223) 0:01:40.390 **********
2026-03-16 01:14:17.048667 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:14:17.048671 | orchestrator |
2026-03-16 01:14:17.048675 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-16 01:14:17.048679 | orchestrator | Monday 16 March 2026 01:11:23 +0000 (0:00:05.082) 0:01:45.472 **********
2026-03-16 01:14:17.048686 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-2, testbed-node-1
2026-03-16 01:14:17.048690 | orchestrator |
2026-03-16 01:14:17.048694 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2026-03-16 01:14:17.048697 | orchestrator | Monday 16 March 2026 01:11:24 +0000 (0:00:01.061) 0:01:46.534 **********
2026-03-16 01:14:17.048701 | orchestrator | changed: [testbed-node-1]
2026-03-16 01:14:17.048705 | orchestrator | changed: [testbed-node-2]
2026-03-16 01:14:17.048709 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:14:17.048713 | orchestrator |
2026-03-16 01:14:17.048716 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2026-03-16 01:14:17.049003 | orchestrator | Monday 16 March 2026 01:11:30 +0000 (0:00:05.395) 0:01:51.930 **********
2026-03-16 01:14:17.049008 | orchestrator | changed: [testbed-node-1]
2026-03-16 01:14:17.049012 | orchestrator | changed: [testbed-node-2]
2026-03-16 01:14:17.049016 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:14:17.049019 | orchestrator |
2026-03-16 01:14:17.049023 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2026-03-16 01:14:17.049027 | orchestrator | Monday 16 March 2026 01:11:34 +0000 (0:00:03.996) 0:01:55.927 **********
2026-03-16 01:14:17.049031 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:14:17.049034 | orchestrator | changed: [testbed-node-1]
2026-03-16 01:14:17.049038 | orchestrator | changed: [testbed-node-2]
2026-03-16 01:14:17.049042 | orchestrator |
2026-03-16 01:14:17.049046 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2026-03-16 01:14:17.049049 | orchestrator | Monday 16 March 2026 01:11:35 +0000 (0:00:00.742) 0:01:56.670 **********
2026-03-16 01:14:17.049053 | orchestrator | ok: [testbed-node-1]
2026-03-16 01:14:17.049057 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:14:17.049060 | orchestrator | ok: [testbed-node-2]
2026-03-16 01:14:17.049064 | orchestrator |
2026-03-16 01:14:17.049068 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2026-03-16 01:14:17.049072 | orchestrator | Monday 16 March 2026 01:11:36 +0000 (0:00:01.667) 0:01:58.337 **********
2026-03-16 01:14:17.049075 | orchestrator | changed: [testbed-node-2]
2026-03-16 01:14:17.049079 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:14:17.049083 | orchestrator | changed: [testbed-node-1]
2026-03-16 01:14:17.049086 | orchestrator |
2026-03-16 01:14:17.049090 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2026-03-16 01:14:17.049094 | orchestrator | Monday 16 March 2026 01:11:38 +0000 (0:00:01.268) 0:01:59.606 **********
2026-03-16 01:14:17.049097 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:14:17.049101 | orchestrator | changed: [testbed-node-1]
2026-03-16 01:14:17.049105 | orchestrator | changed: [testbed-node-2]
2026-03-16 01:14:17.049108 | orchestrator |
2026-03-16 01:14:17.049112 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2026-03-16 01:14:17.049116 | orchestrator | Monday 16 March 2026 01:11:39 +0000 (0:00:01.249) 0:02:00.855 **********
2026-03-16 01:14:17.049120 | orchestrator | changed: [testbed-node-1]
2026-03-16 01:14:17.049123 | orchestrator | changed: [testbed-node-2]
2026-03-16 01:14:17.049127 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:14:17.049131 | orchestrator |
2026-03-16 01:14:17.049151 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2026-03-16 01:14:17.049191 | orchestrator | Monday 16 March 2026 01:11:41 +0000 (0:00:02.167) 0:02:03.022 **********
2026-03-16 01:14:17.049198 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:14:17.049202 | orchestrator | changed: [testbed-node-2]
2026-03-16 01:14:17.049205 | orchestrator | changed: [testbed-node-1]
2026-03-16 01:14:17.049209 | orchestrator |
2026-03-16 01:14:17.049217 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2026-03-16 01:14:17.049221 | orchestrator | Monday 16 March 2026 01:11:43 +0000 (0:00:01.876) 0:02:04.899 **********
2026-03-16 01:14:17.049225 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:14:17.049234 | orchestrator | ok: [testbed-node-1]
2026-03-16 01:14:17.049238 | orchestrator | ok: [testbed-node-2]
2026-03-16 01:14:17.049242 | orchestrator |
2026-03-16 01:14:17.049245 | orchestrator | TASK [octavia : Gather facts] **************************************************
2026-03-16 01:14:17.049249 | orchestrator | Monday 16 March 2026 01:11:44 +0000 (0:00:00.699) 0:02:05.598 **********
2026-03-16 01:14:17.049253 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:14:17.049257 | orchestrator | ok: [testbed-node-1]
2026-03-16 01:14:17.049261 | orchestrator | ok: [testbed-node-2]
2026-03-16 01:14:17.049264 | orchestrator |
2026-03-16 01:14:17.049268 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-16 01:14:17.049272 | orchestrator | Monday 16 March 2026 01:11:48 +0000 (0:00:04.015) 0:02:09.613 **********
2026-03-16 01:14:17.049276 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-16 01:14:17.049279 | orchestrator |
2026-03-16 01:14:17.049283 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2026-03-16 01:14:17.049287 | orchestrator | Monday 16 March 2026 01:11:48 +0000 (0:00:00.893) 0:02:10.507 **********
2026-03-16 01:14:17.049291 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:14:17.049295 | orchestrator |
2026-03-16 01:14:17.049298 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-03-16 01:14:17.049302 | orchestrator | Monday 16 March 2026 01:11:52 +0000 (0:00:03.165) 0:02:13.672 **********
2026-03-16 01:14:17.049306 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:14:17.049309 | orchestrator |
2026-03-16 01:14:17.049313 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2026-03-16 01:14:17.049327 | orchestrator | Monday 16 March 2026 01:11:55 +0000 (0:00:03.030) 0:02:16.703 **********
2026-03-16 01:14:17.049331 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-03-16 01:14:17.049335 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-03-16 01:14:17.049339 | orchestrator |
2026-03-16 01:14:17.049343 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2026-03-16 01:14:17.049347 | orchestrator | Monday 16 March 2026 01:12:02 +0000 (0:00:07.820) 0:02:24.523 **********
2026-03-16 01:14:17.049351 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:14:17.049354 | orchestrator |
2026-03-16 01:14:17.049358 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2026-03-16 01:14:17.049363 | orchestrator | Monday 16 March 2026 01:12:06 +0000 (0:00:03.100) 0:02:27.624 **********
2026-03-16 01:14:17.049370 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:14:17.049375 | orchestrator | ok: [testbed-node-1]
2026-03-16 01:14:17.049382 | orchestrator | ok: [testbed-node-2]
2026-03-16 01:14:17.049388 | orchestrator |
2026-03-16 01:14:17.049394 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2026-03-16 01:14:17.049401 | orchestrator | Monday 16 March 2026 01:12:06 +0000 (0:00:00.321) 0:02:27.946 **********
2026-03-16 01:14:17.049408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-16 01:14:17.049432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-16 01:14:17.049443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-16 01:14:17.049448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-16 01:14:17.049453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-16 01:14:17.049457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-16 01:14:17.049461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-16 01:14:17.049468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-16 01:14:17.049487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-16 01:14:17.049491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-16 01:14:17.049495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-16 01:14:17.049499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-16 01:14:17.049503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-16 01:14:17.049507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-16 01:14:17.049525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-16 01:14:17.049529 | orchestrator |
2026-03-16 01:14:17.049533 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************
2026-03-16 01:14:17.049537 | orchestrator | Monday 16 March 2026 01:12:08 +0000 (0:00:02.558) 0:02:30.504 **********
2026-03-16 01:14:17.049541 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:14:17.049545 | orchestrator |
2026-03-16 01:14:17.049551 | orchestrator | TASK [octavia : Set octavia policy file] ***************************************
2026-03-16 01:14:17.049554 | orchestrator | Monday 16 March 2026 01:12:09 +0000 (0:00:00.150) 0:02:30.655 **********
2026-03-16 01:14:17.049558 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:14:17.049562 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:14:17.049566 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:14:17.049570 | orchestrator |
2026-03-16 01:14:17.049574 | orchestrator | TASK [octavia : Copying over existing policy file] *****************************
2026-03-16 01:14:17.049577 | orchestrator | Monday 16 March 2026 01:12:09 +0000 (0:00:00.516) 0:02:31.171 **********
2026-03-16 01:14:17.049582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-16 01:14:17.049586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-16 01:14:17.049590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-16 01:14:17.049600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-16 01:14:17.049604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-16 01:14:17.049608 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:14:17.049627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-16 01:14:17.049632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-16 01:14:17.049637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-16 01:14:17.049641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-16 01:14:17.049649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-16 01:14:17.049653 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:14:17.049673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-16 01:14:17.049680 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-16 01:14:17.049685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-16 01:14:17.049690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-16 01:14:17.049694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-16 01:14:17.049702 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:14:17.049710 | orchestrator | 2026-03-16 01:14:17.049719 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-16 01:14:17.049727 | orchestrator | Monday 16 March 2026 01:12:10 +0000 (0:00:00.701) 0:02:31.872 ********** 2026-03-16 01:14:17.049734 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 01:14:17.049741 | orchestrator | 2026-03-16 01:14:17.049747 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-03-16 01:14:17.049753 | orchestrator | Monday 16 March 2026 01:12:10 +0000 (0:00:00.602) 0:02:32.474 ********** 2026-03-16 01:14:17.049761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-16 01:14:17.049793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-16 01:14:17.049803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-16 01:14:17.049816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-16 01:14:17.049823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-16 01:14:17.049828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-16 01:14:17.049833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 
'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-16 01:14:17.049842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-16 01:14:17.049847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-16 01:14:17.049852 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-16 01:14:17.049859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-16 01:14:17.049864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-16 01:14:17.049869 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:17.049877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:17.049884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:17.049889 | orchestrator | 2026-03-16 01:14:17.049895 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-03-16 01:14:17.049905 | 
orchestrator | Monday 16 March 2026 01:12:16 +0000 (0:00:05.517) 0:02:37.992 ********** 2026-03-16 01:14:17.049913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-16 01:14:17.049924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-16 01:14:17.049932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-16 01:14:17.049939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-16 01:14:17.049950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-16 01:14:17.049957 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:14:17.049968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-16 01:14:17.049978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-16 01:14:17.049982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-16 01:14:17.049986 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-16 01:14:17.049990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-16 01:14:17.049994 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:14:17.050003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-16 01:14:17.050007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-16 01:14:17.050046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-16 01:14:17.050052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-16 01:14:17.050056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-16 01:14:17.050060 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:14:17.050064 | orchestrator | 2026-03-16 01:14:17.050068 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-03-16 01:14:17.050072 | orchestrator | Monday 16 March 2026 01:12:17 +0000 (0:00:00.676) 0:02:38.669 ********** 2026-03-16 01:14:17.050076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-16 01:14:17.050088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-16 01:14:17.050092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-16 01:14:17.050099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-16 01:14:17.050103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-16 01:14:17.050107 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:14:17.050111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-16 01:14:17.050115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-16 01:14:17.050125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-16 01:14:17.050129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-16 01:14:17.050135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-16 01:14:17.050139 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:14:17.050143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-16 01:14:17.050148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-16 01:14:17.050152 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-16 01:14:17.050203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-16 01:14:17.050209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-16 01:14:17.050216 | orchestrator | skipping: [testbed-node-2] 
2026-03-16 01:14:17.050220 | orchestrator | 2026-03-16 01:14:17.050224 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-03-16 01:14:17.050228 | orchestrator | Monday 16 March 2026 01:12:17 +0000 (0:00:00.895) 0:02:39.564 ********** 2026-03-16 01:14:17.050232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-16 01:14:17.050236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-16 01:14:17.050240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-16 01:14:17.050249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-16 01:14:17.050258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-16 01:14:17.050262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-16 01:14:17.050266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-16 01:14:17.050270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-16 01:14:17.050274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-16 01:14:17.050278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-16 01:14:17.050288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-16 01:14:17.050295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-16 01:14:17.050299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:17.050303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:17.050307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:17.050311 | orchestrator | 2026-03-16 01:14:17.050315 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-03-16 01:14:17.050319 | orchestrator | Monday 16 March 2026 01:12:23 +0000 (0:00:05.586) 0:02:45.150 ********** 2026-03-16 01:14:17.050323 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-16 01:14:17.050327 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-16 01:14:17.050331 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-16 01:14:17.050335 | orchestrator | 2026-03-16 01:14:17.050339 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-03-16 01:14:17.050343 | orchestrator | Monday 16 March 2026 01:12:26 +0000 (0:00:02.578) 0:02:47.729 ********** 2026-03-16 01:14:17.050371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-16 01:14:17.050380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-16 01:14:17.050384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-16 01:14:17.050388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-16 01:14:17.050392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-16 01:14:17.050396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 
'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-16 01:14:17.050408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-16 01:14:17.050413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-16 01:14:17.050417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-16 01:14:17.050421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-16 01:14:17.050425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-16 01:14:17.050429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-16 01:14:17.050457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:17.050469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:17.050473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:17.050478 | orchestrator | 2026-03-16 01:14:17.050481 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-03-16 01:14:17.050485 | orchestrator | Monday 16 March 2026 01:12:43 +0000 (0:00:17.080) 0:03:04.809 ********** 2026-03-16 01:14:17.050489 | orchestrator | changed: [testbed-node-1] 2026-03-16 01:14:17.050493 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:14:17.050497 | orchestrator | changed: [testbed-node-2] 2026-03-16 01:14:17.050501 | orchestrator | 2026-03-16 01:14:17.050505 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-03-16 01:14:17.050509 | orchestrator | Monday 16 March 2026 01:12:44 +0000 (0:00:01.448) 0:03:06.258 ********** 2026-03-16 01:14:17.050513 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-16 01:14:17.050516 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-16 01:14:17.050520 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-16 01:14:17.050524 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-16 01:14:17.050528 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-16 01:14:17.050532 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-16 01:14:17.050536 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-16 01:14:17.050540 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-16 01:14:17.050544 | orchestrator | changed: 
[testbed-node-2] => (item=server_ca.cert.pem) 2026-03-16 01:14:17.050548 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-16 01:14:17.050551 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-16 01:14:17.050555 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-16 01:14:17.050559 | orchestrator | 2026-03-16 01:14:17.050564 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-03-16 01:14:17.050570 | orchestrator | Monday 16 March 2026 01:12:49 +0000 (0:00:05.221) 0:03:11.479 ********** 2026-03-16 01:14:17.050574 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-16 01:14:17.050578 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-16 01:14:17.050582 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-16 01:14:17.050586 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-16 01:14:17.050590 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-16 01:14:17.050593 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-16 01:14:17.050597 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-16 01:14:17.050601 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-16 01:14:17.050605 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-16 01:14:17.050609 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-16 01:14:17.050612 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-16 01:14:17.050616 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-16 01:14:17.050620 | orchestrator | 2026-03-16 01:14:17.050624 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 
2026-03-16 01:14:17.050628 | orchestrator | Monday 16 March 2026 01:12:55 +0000 (0:00:06.072) 0:03:17.551 ********** 2026-03-16 01:14:17.050632 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-16 01:14:17.050635 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-16 01:14:17.050639 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-16 01:14:17.050643 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-16 01:14:17.050647 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-16 01:14:17.050651 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-16 01:14:17.050655 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-16 01:14:17.050659 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-16 01:14:17.050665 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-16 01:14:17.050669 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-16 01:14:17.050673 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-16 01:14:17.050677 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-16 01:14:17.050680 | orchestrator | 2026-03-16 01:14:17.050692 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-03-16 01:14:17.050697 | orchestrator | Monday 16 March 2026 01:13:01 +0000 (0:00:05.426) 0:03:22.978 ********** 2026-03-16 01:14:17.050701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-16 01:14:17.050705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-16 01:14:17.050713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-16 01:14:17.050720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-16 01:14:17.050729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-16 01:14:17.050739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-16 01:14:17.050746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-16 01:14:17.050756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-16 01:14:17.050762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-16 01:14:17.050768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-16 01:14:17.050774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-16 01:14:17.050786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-16 01:14:17.050792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:17.050799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:17.050808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-16 01:14:17.050814 | orchestrator | 2026-03-16 01:14:17.050821 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-16 01:14:17.050827 | orchestrator | Monday 16 March 2026 01:13:04 +0000 (0:00:03.349) 0:03:26.328 ********** 2026-03-16 01:14:17.050833 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:14:17.050839 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:14:17.050845 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:14:17.050851 | orchestrator | 2026-03-16 01:14:17.050857 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-03-16 01:14:17.050863 | orchestrator | Monday 16 March 2026 01:13:05 +0000 (0:00:00.294) 0:03:26.623 ********** 2026-03-16 01:14:17.050869 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:14:17.050875 | orchestrator | 2026-03-16 01:14:17.050881 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-03-16 01:14:17.050887 | orchestrator | Monday 16 March 2026 01:13:07 +0000 (0:00:02.105) 0:03:28.728 ********** 2026-03-16 01:14:17.050893 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:14:17.050899 | orchestrator | 2026-03-16 01:14:17.050905 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-03-16 01:14:17.050911 | orchestrator | Monday 16 March 2026 01:13:09 +0000 (0:00:02.083) 0:03:30.811 ********** 2026-03-16 01:14:17.050917 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:14:17.050924 | orchestrator | 2026-03-16 01:14:17.050931 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-03-16 01:14:17.050937 | orchestrator | Monday 16 March 2026 01:13:11 +0000 
(0:00:02.169) 0:03:32.981 ********** 2026-03-16 01:14:17.050943 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:14:17.050949 | orchestrator | 2026-03-16 01:14:17.050955 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-03-16 01:14:17.050961 | orchestrator | Monday 16 March 2026 01:13:14 +0000 (0:00:03.318) 0:03:36.300 ********** 2026-03-16 01:14:17.050968 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:14:17.050973 | orchestrator | 2026-03-16 01:14:17.050976 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-16 01:14:17.050980 | orchestrator | Monday 16 March 2026 01:13:37 +0000 (0:00:22.896) 0:03:59.196 ********** 2026-03-16 01:14:17.050984 | orchestrator | 2026-03-16 01:14:17.050988 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-16 01:14:17.050992 | orchestrator | Monday 16 March 2026 01:13:37 +0000 (0:00:00.070) 0:03:59.266 ********** 2026-03-16 01:14:17.050995 | orchestrator | 2026-03-16 01:14:17.050999 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-16 01:14:17.051003 | orchestrator | Monday 16 March 2026 01:13:37 +0000 (0:00:00.064) 0:03:59.331 ********** 2026-03-16 01:14:17.051007 | orchestrator | 2026-03-16 01:14:17.051011 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-03-16 01:14:17.051019 | orchestrator | Monday 16 March 2026 01:13:37 +0000 (0:00:00.070) 0:03:59.402 ********** 2026-03-16 01:14:17.051027 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:14:17.051031 | orchestrator | changed: [testbed-node-1] 2026-03-16 01:14:17.051036 | orchestrator | changed: [testbed-node-2] 2026-03-16 01:14:17.051039 | orchestrator | 2026-03-16 01:14:17.051043 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-03-16 
01:14:17.051049 | orchestrator | Monday 16 March 2026 01:13:47 +0000 (0:00:09.561) 0:04:08.963 ********** 2026-03-16 01:14:17.051054 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:14:17.051058 | orchestrator | changed: [testbed-node-2] 2026-03-16 01:14:17.051061 | orchestrator | changed: [testbed-node-1] 2026-03-16 01:14:17.051065 | orchestrator | 2026-03-16 01:14:17.051069 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-03-16 01:14:17.051073 | orchestrator | Monday 16 March 2026 01:13:54 +0000 (0:00:06.661) 0:04:15.624 ********** 2026-03-16 01:14:17.051077 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:14:17.051080 | orchestrator | changed: [testbed-node-1] 2026-03-16 01:14:17.051084 | orchestrator | changed: [testbed-node-2] 2026-03-16 01:14:17.051088 | orchestrator | 2026-03-16 01:14:17.051094 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-03-16 01:14:17.051100 | orchestrator | Monday 16 March 2026 01:14:00 +0000 (0:00:06.249) 0:04:21.874 ********** 2026-03-16 01:14:17.051107 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:14:17.051116 | orchestrator | changed: [testbed-node-2] 2026-03-16 01:14:17.051126 | orchestrator | changed: [testbed-node-1] 2026-03-16 01:14:17.051131 | orchestrator | 2026-03-16 01:14:17.051136 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-03-16 01:14:17.051142 | orchestrator | Monday 16 March 2026 01:14:05 +0000 (0:00:05.017) 0:04:26.892 ********** 2026-03-16 01:14:17.051147 | orchestrator | changed: [testbed-node-2] 2026-03-16 01:14:17.051153 | orchestrator | changed: [testbed-node-1] 2026-03-16 01:14:17.051178 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:14:17.051185 | orchestrator | 2026-03-16 01:14:17.051192 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-16 01:14:17.051198 
| orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-16 01:14:17.051205 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-16 01:14:17.051211 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-16 01:14:17.051217 | orchestrator | 2026-03-16 01:14:17.051222 | orchestrator | 2026-03-16 01:14:17.051228 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-16 01:14:17.051234 | orchestrator | Monday 16 March 2026 01:14:13 +0000 (0:00:08.468) 0:04:35.361 ********** 2026-03-16 01:14:17.051240 | orchestrator | =============================================================================== 2026-03-16 01:14:17.051246 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 22.90s 2026-03-16 01:14:17.051253 | orchestrator | octavia : Adding octavia related roles --------------------------------- 17.99s 2026-03-16 01:14:17.051271 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 17.08s 2026-03-16 01:14:17.051280 | orchestrator | octavia : Add rules for security groups -------------------------------- 14.33s 2026-03-16 01:14:17.051284 | orchestrator | octavia : Restart octavia-api container --------------------------------- 9.56s 2026-03-16 01:14:17.051288 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.30s 2026-03-16 01:14:17.051295 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.47s 2026-03-16 01:14:17.051301 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 8.47s 2026-03-16 01:14:17.051309 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 8.39s 2026-03-16 01:14:17.051323 | orchestrator | octavia : 
Get security groups for octavia ------------------------------- 7.82s 2026-03-16 01:14:17.051329 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 7.24s 2026-03-16 01:14:17.051335 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 6.66s 2026-03-16 01:14:17.051341 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 6.25s 2026-03-16 01:14:17.051347 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 6.07s 2026-03-16 01:14:17.051353 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.59s 2026-03-16 01:14:17.051359 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.52s 2026-03-16 01:14:17.051365 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.43s 2026-03-16 01:14:17.051371 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.40s 2026-03-16 01:14:17.051378 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.22s 2026-03-16 01:14:17.051385 | orchestrator | octavia : Update loadbalancer management subnet ------------------------- 5.08s 2026-03-16 01:14:17.051391 | orchestrator | 2026-03-16 01:14:17 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-16 01:15:17.921600 | orchestrator | 2026-03-16 01:17:18.405137 | orchestrator | 2026-03-16 01:17:18.408141 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Mon Mar 16 01:17:18 UTC 2026 2026-03-16 01:17:18.408253 | orchestrator | 
2026-03-16 01:17:18.812562 | orchestrator | ok: Runtime: 0:36:25.618441 2026-03-16 01:17:19.077916 | 2026-03-16 01:17:19.078059 | TASK [Bootstrap services] 2026-03-16 01:17:19.919266 | orchestrator | 2026-03-16 01:17:19.919384 | orchestrator | # BOOTSTRAP 2026-03-16 01:17:19.919396 | orchestrator | 2026-03-16 01:17:19.919404 | orchestrator | + set -e 2026-03-16 01:17:19.919411 | orchestrator | + echo 2026-03-16 01:17:19.919419 | orchestrator | + echo '# BOOTSTRAP' 2026-03-16 01:17:19.919430 | orchestrator | + echo 2026-03-16 01:17:19.919459 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-03-16 01:17:19.928686 | orchestrator | + set -e 2026-03-16 01:17:19.928741 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-03-16 01:17:24.624711 | orchestrator | 2026-03-16 01:17:24 | INFO  | It takes a moment until task 534b20a7-0e4d-4f5f-897d-50961eccefeb (flavor-manager) has been started and output is visible here. 2026-03-16 01:17:33.044121 | orchestrator | 2026-03-16 01:17:27 | INFO  | Flavor SCS-1L-1 created 2026-03-16 01:17:33.044264 | orchestrator | 2026-03-16 01:17:27 | INFO  | Flavor SCS-1L-1-5 created 2026-03-16 01:17:33.044284 | orchestrator | 2026-03-16 01:17:28 | INFO  | Flavor SCS-1V-2 created 2026-03-16 01:17:33.044295 | orchestrator | 2026-03-16 01:17:28 | INFO  | Flavor SCS-1V-2-5 created 2026-03-16 01:17:33.044305 | orchestrator | 2026-03-16 01:17:28 | INFO  | Flavor SCS-1V-4 created 2026-03-16 01:17:33.044314 | orchestrator | 2026-03-16 01:17:28 | INFO  | Flavor SCS-1V-4-10 created 2026-03-16 01:17:33.044323 | orchestrator | 2026-03-16 01:17:28 | INFO  | Flavor SCS-1V-8 created 2026-03-16 01:17:33.044334 | orchestrator | 2026-03-16 01:17:29 | INFO  | Flavor SCS-1V-8-20 created 2026-03-16 01:17:33.044355 | orchestrator | 2026-03-16 01:17:29 | INFO  | Flavor SCS-2V-4 created 2026-03-16 01:17:33.044365 | orchestrator | 2026-03-16 01:17:29 | INFO  | Flavor SCS-2V-4-10 created 2026-03-16 
01:17:33.044374 | orchestrator | 2026-03-16 01:17:29 | INFO  | Flavor SCS-2V-8 created 2026-03-16 01:17:33.044383 | orchestrator | 2026-03-16 01:17:29 | INFO  | Flavor SCS-2V-8-20 created 2026-03-16 01:17:33.044394 | orchestrator | 2026-03-16 01:17:30 | INFO  | Flavor SCS-2V-16 created 2026-03-16 01:17:33.044409 | orchestrator | 2026-03-16 01:17:30 | INFO  | Flavor SCS-2V-16-50 created 2026-03-16 01:17:33.044429 | orchestrator | 2026-03-16 01:17:30 | INFO  | Flavor SCS-4V-8 created 2026-03-16 01:17:33.044450 | orchestrator | 2026-03-16 01:17:30 | INFO  | Flavor SCS-4V-8-20 created 2026-03-16 01:17:33.044463 | orchestrator | 2026-03-16 01:17:30 | INFO  | Flavor SCS-4V-16 created 2026-03-16 01:17:33.044477 | orchestrator | 2026-03-16 01:17:30 | INFO  | Flavor SCS-4V-16-50 created 2026-03-16 01:17:33.044492 | orchestrator | 2026-03-16 01:17:31 | INFO  | Flavor SCS-4V-32 created 2026-03-16 01:17:33.044506 | orchestrator | 2026-03-16 01:17:31 | INFO  | Flavor SCS-4V-32-100 created 2026-03-16 01:17:33.044517 | orchestrator | 2026-03-16 01:17:31 | INFO  | Flavor SCS-8V-16 created 2026-03-16 01:17:33.044532 | orchestrator | 2026-03-16 01:17:31 | INFO  | Flavor SCS-8V-16-50 created 2026-03-16 01:17:33.044548 | orchestrator | 2026-03-16 01:17:31 | INFO  | Flavor SCS-8V-32 created 2026-03-16 01:17:33.044561 | orchestrator | 2026-03-16 01:17:31 | INFO  | Flavor SCS-8V-32-100 created 2026-03-16 01:17:33.044574 | orchestrator | 2026-03-16 01:17:32 | INFO  | Flavor SCS-16V-32 created 2026-03-16 01:17:33.044588 | orchestrator | 2026-03-16 01:17:32 | INFO  | Flavor SCS-16V-32-100 created 2026-03-16 01:17:33.044602 | orchestrator | 2026-03-16 01:17:32 | INFO  | Flavor SCS-2V-4-20s created 2026-03-16 01:17:33.044615 | orchestrator | 2026-03-16 01:17:32 | INFO  | Flavor SCS-4V-8-50s created 2026-03-16 01:17:33.044630 | orchestrator | 2026-03-16 01:17:32 | INFO  | Flavor SCS-8V-32-100s created 2026-03-16 01:17:35.312015 | orchestrator | 2026-03-16 01:17:35 | INFO  | Trying to run play 
bootstrap-basic in environment openstack 2026-03-16 01:17:45.470183 | orchestrator | 2026-03-16 01:17:45 | INFO  | Task ea1922a5-c97b-42eb-8f16-2848b1bd5434 (bootstrap-basic) was prepared for execution. 2026-03-16 01:17:45.470335 | orchestrator | 2026-03-16 01:17:45 | INFO  | It takes a moment until task ea1922a5-c97b-42eb-8f16-2848b1bd5434 (bootstrap-basic) has been started and output is visible here. 2026-03-16 01:18:33.383279 | orchestrator | 2026-03-16 01:18:33.383370 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-03-16 01:18:33.383381 | orchestrator | 2026-03-16 01:18:33.383388 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-16 01:18:33.383396 | orchestrator | Monday 16 March 2026 01:17:49 +0000 (0:00:00.078) 0:00:00.078 ********** 2026-03-16 01:18:33.383403 | orchestrator | ok: [localhost] 2026-03-16 01:18:33.383411 | orchestrator | 2026-03-16 01:18:33.383417 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-03-16 01:18:33.383424 | orchestrator | Monday 16 March 2026 01:17:51 +0000 (0:00:02.012) 0:00:02.091 ********** 2026-03-16 01:18:33.383431 | orchestrator | ok: [localhost] 2026-03-16 01:18:33.383437 | orchestrator | 2026-03-16 01:18:33.383444 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-03-16 01:18:33.383451 | orchestrator | Monday 16 March 2026 01:18:00 +0000 (0:00:08.871) 0:00:10.962 ********** 2026-03-16 01:18:33.383458 | orchestrator | changed: [localhost] 2026-03-16 01:18:33.383465 | orchestrator | 2026-03-16 01:18:33.383472 | orchestrator | TASK [Create public network] *************************************************** 2026-03-16 01:18:33.383479 | orchestrator | Monday 16 March 2026 01:18:08 +0000 (0:00:08.268) 0:00:19.231 ********** 2026-03-16 01:18:33.383486 | orchestrator | changed: [localhost] 2026-03-16 01:18:33.383492 | 
orchestrator | 2026-03-16 01:18:33.383499 | orchestrator | TASK [Set public network to default] ******************************************* 2026-03-16 01:18:33.383505 | orchestrator | Monday 16 March 2026 01:18:14 +0000 (0:00:05.575) 0:00:24.807 ********** 2026-03-16 01:18:33.383516 | orchestrator | changed: [localhost] 2026-03-16 01:18:33.383522 | orchestrator | 2026-03-16 01:18:33.383529 | orchestrator | TASK [Create public subnet] **************************************************** 2026-03-16 01:18:33.383536 | orchestrator | Monday 16 March 2026 01:18:20 +0000 (0:00:06.293) 0:00:31.100 ********** 2026-03-16 01:18:33.383543 | orchestrator | changed: [localhost] 2026-03-16 01:18:33.383549 | orchestrator | 2026-03-16 01:18:33.383556 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-03-16 01:18:33.383563 | orchestrator | Monday 16 March 2026 01:18:25 +0000 (0:00:04.535) 0:00:35.635 ********** 2026-03-16 01:18:33.383569 | orchestrator | changed: [localhost] 2026-03-16 01:18:33.383576 | orchestrator | 2026-03-16 01:18:33.383583 | orchestrator | TASK [Create manager role] ***************************************************** 2026-03-16 01:18:33.383597 | orchestrator | Monday 16 March 2026 01:18:29 +0000 (0:00:03.977) 0:00:39.613 ********** 2026-03-16 01:18:33.383604 | orchestrator | ok: [localhost] 2026-03-16 01:18:33.383642 | orchestrator | 2026-03-16 01:18:33.383649 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-16 01:18:33.383656 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-16 01:18:33.383664 | orchestrator | 2026-03-16 01:18:33.383670 | orchestrator | 2026-03-16 01:18:33.383677 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-16 01:18:33.383684 | orchestrator | Monday 16 March 2026 01:18:33 +0000 (0:00:03.699) 0:00:43.312 ********** 
2026-03-16 01:18:33.383691 | orchestrator | =============================================================================== 2026-03-16 01:18:33.383697 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.87s 2026-03-16 01:18:33.383704 | orchestrator | Create volume type LUKS ------------------------------------------------- 8.27s 2026-03-16 01:18:33.383710 | orchestrator | Set public network to default ------------------------------------------- 6.29s 2026-03-16 01:18:33.383717 | orchestrator | Create public network --------------------------------------------------- 5.58s 2026-03-16 01:18:33.383740 | orchestrator | Create public subnet ---------------------------------------------------- 4.54s 2026-03-16 01:18:33.383747 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.98s 2026-03-16 01:18:33.383754 | orchestrator | Create manager role ----------------------------------------------------- 3.70s 2026-03-16 01:18:33.383761 | orchestrator | Gathering Facts --------------------------------------------------------- 2.01s 2026-03-16 01:18:36.021179 | orchestrator | 2026-03-16 01:18:36 | INFO  | It takes a moment until task fa52404b-49dc-4a9e-a56b-ee1bca03d670 (image-manager) has been started and output is visible here. 
2026-03-16 01:19:20.357711 | orchestrator | 2026-03-16 01:18:38 | INFO  | Processing image 'Cirros 0.6.2'
2026-03-16 01:19:20.357816 | orchestrator | 2026-03-16 01:18:39 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2026-03-16 01:19:20.357829 | orchestrator | 2026-03-16 01:18:39 | INFO  | Importing image Cirros 0.6.2
2026-03-16 01:19:20.357838 | orchestrator | 2026-03-16 01:18:39 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-03-16 01:19:20.357846 | orchestrator | 2026-03-16 01:18:41 | INFO  | Waiting for image to leave queued state...
2026-03-16 01:19:20.357856 | orchestrator | 2026-03-16 01:18:43 | INFO  | Waiting for import to complete...
2026-03-16 01:19:20.357863 | orchestrator | 2026-03-16 01:18:53 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2026-03-16 01:19:20.357872 | orchestrator | 2026-03-16 01:18:54 | INFO  | Checking parameters of 'Cirros 0.6.2'
2026-03-16 01:19:20.357879 | orchestrator | 2026-03-16 01:18:54 | INFO  | Setting internal_version = 0.6.2
2026-03-16 01:19:20.357887 | orchestrator | 2026-03-16 01:18:54 | INFO  | Setting image_original_user = cirros
2026-03-16 01:19:20.357895 | orchestrator | 2026-03-16 01:18:54 | INFO  | Adding tag os:cirros
2026-03-16 01:19:20.357902 | orchestrator | 2026-03-16 01:18:54 | INFO  | Setting property architecture: x86_64
2026-03-16 01:19:20.357910 | orchestrator | 2026-03-16 01:18:54 | INFO  | Setting property hw_disk_bus: scsi
2026-03-16 01:19:20.357917 | orchestrator | 2026-03-16 01:18:54 | INFO  | Setting property hw_rng_model: virtio
2026-03-16 01:19:20.357924 | orchestrator | 2026-03-16 01:18:55 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-03-16 01:19:20.357932 | orchestrator | 2026-03-16 01:18:55 | INFO  | Setting property hw_watchdog_action: reset
2026-03-16 01:19:20.357939 | orchestrator | 2026-03-16 01:18:55 | INFO  | Setting property hypervisor_type: qemu
2026-03-16 01:19:20.357945 | orchestrator | 2026-03-16 01:18:55 | INFO  | Setting property os_distro: cirros
2026-03-16 01:19:20.357952 | orchestrator | 2026-03-16 01:18:56 | INFO  | Setting property os_purpose: minimal
2026-03-16 01:19:20.357959 | orchestrator | 2026-03-16 01:18:56 | INFO  | Setting property replace_frequency: never
2026-03-16 01:19:20.357966 | orchestrator | 2026-03-16 01:18:56 | INFO  | Setting property uuid_validity: none
2026-03-16 01:19:20.357973 | orchestrator | 2026-03-16 01:18:57 | INFO  | Setting property provided_until: none
2026-03-16 01:19:20.357979 | orchestrator | 2026-03-16 01:18:57 | INFO  | Setting property image_description: Cirros
2026-03-16 01:19:20.357987 | orchestrator | 2026-03-16 01:18:57 | INFO  | Setting property image_name: Cirros
2026-03-16 01:19:20.357994 | orchestrator | 2026-03-16 01:18:57 | INFO  | Setting property internal_version: 0.6.2
2026-03-16 01:19:20.358002 | orchestrator | 2026-03-16 01:18:57 | INFO  | Setting property image_original_user: cirros
2026-03-16 01:19:20.358084 | orchestrator | 2026-03-16 01:18:58 | INFO  | Setting property os_version: 0.6.2
2026-03-16 01:19:20.358102 | orchestrator | 2026-03-16 01:18:58 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-03-16 01:19:20.358110 | orchestrator | 2026-03-16 01:18:58 | INFO  | Setting property image_build_date: 2023-05-30
2026-03-16 01:19:20.358117 | orchestrator | 2026-03-16 01:18:58 | INFO  | Checking status of 'Cirros 0.6.2'
2026-03-16 01:19:20.358124 | orchestrator | 2026-03-16 01:18:58 | INFO  | Checking visibility of 'Cirros 0.6.2'
2026-03-16 01:19:20.358130 | orchestrator | 2026-03-16 01:18:58 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2026-03-16 01:19:20.358137 | orchestrator | 2026-03-16 01:18:59 | INFO  | Processing image 'Cirros 0.6.3'
2026-03-16 01:19:20.358146 | orchestrator | 2026-03-16 01:18:59 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2026-03-16 01:19:20.358154 | orchestrator | 2026-03-16 01:18:59 | INFO  | Importing image Cirros 0.6.3
2026-03-16 01:19:20.358161 | orchestrator | 2026-03-16 01:18:59 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-03-16 01:19:20.358168 | orchestrator | 2026-03-16 01:19:01 | INFO  | Waiting for image to leave queued state...
2026-03-16 01:19:20.358174 | orchestrator | 2026-03-16 01:19:03 | INFO  | Waiting for import to complete...
2026-03-16 01:19:20.358199 | orchestrator | 2026-03-16 01:19:13 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-03-16 01:19:20.358206 | orchestrator | 2026-03-16 01:19:14 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-03-16 01:19:20.358213 | orchestrator | 2026-03-16 01:19:14 | INFO  | Setting internal_version = 0.6.3
2026-03-16 01:19:20.358220 | orchestrator | 2026-03-16 01:19:14 | INFO  | Setting image_original_user = cirros
2026-03-16 01:19:20.358227 | orchestrator | 2026-03-16 01:19:14 | INFO  | Adding tag os:cirros
2026-03-16 01:19:20.358234 | orchestrator | 2026-03-16 01:19:14 | INFO  | Setting property architecture: x86_64
2026-03-16 01:19:20.358241 | orchestrator | 2026-03-16 01:19:15 | INFO  | Setting property hw_disk_bus: scsi
2026-03-16 01:19:20.358248 | orchestrator | 2026-03-16 01:19:15 | INFO  | Setting property hw_rng_model: virtio
2026-03-16 01:19:20.358255 | orchestrator | 2026-03-16 01:19:15 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-03-16 01:19:20.358262 | orchestrator | 2026-03-16 01:19:15 | INFO  | Setting property hw_watchdog_action: reset
2026-03-16 01:19:20.358269 | orchestrator | 2026-03-16 01:19:16 | INFO  | Setting property hypervisor_type: qemu
2026-03-16 01:19:20.358279 | orchestrator | 2026-03-16 01:19:16 | INFO  | Setting property os_distro: cirros
2026-03-16 01:19:20.358286 | orchestrator | 2026-03-16 01:19:16 | INFO  | Setting property os_purpose: minimal
2026-03-16 01:19:20.358292 | orchestrator | 2026-03-16 01:19:16 | INFO  | Setting property replace_frequency: never
2026-03-16 01:19:20.358300 | orchestrator | 2026-03-16 01:19:17 | INFO  | Setting property uuid_validity: none
2026-03-16 01:19:20.358306 | orchestrator | 2026-03-16 01:19:17 | INFO  | Setting property provided_until: none
2026-03-16 01:19:20.358313 | orchestrator | 2026-03-16 01:19:17 | INFO  | Setting property image_description: Cirros
2026-03-16 01:19:20.358320 | orchestrator | 2026-03-16 01:19:17 | INFO  | Setting property image_name: Cirros
2026-03-16 01:19:20.358327 | orchestrator | 2026-03-16 01:19:18 | INFO  | Setting property internal_version: 0.6.3
2026-03-16 01:19:20.358341 | orchestrator | 2026-03-16 01:19:18 | INFO  | Setting property image_original_user: cirros
2026-03-16 01:19:20.358348 | orchestrator | 2026-03-16 01:19:18 | INFO  | Setting property os_version: 0.6.3
2026-03-16 01:19:20.358353 | orchestrator | 2026-03-16 01:19:18 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-03-16 01:19:20.358359 | orchestrator | 2026-03-16 01:19:19 | INFO  | Setting property image_build_date: 2024-09-26
2026-03-16 01:19:20.358366 | orchestrator | 2026-03-16 01:19:19 | INFO  | Checking status of 'Cirros 0.6.3'
2026-03-16 01:19:20.358371 | orchestrator | 2026-03-16 01:19:19 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-03-16 01:19:20.358378 | orchestrator | 2026-03-16 01:19:19 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-03-16 01:19:20.717218 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-03-16 01:19:23.562465 | orchestrator | 2026-03-16 01:19:23 | INFO  | date: 2026-03-15
2026-03-16 01:19:23.562607 | orchestrator | 2026-03-16 01:19:23 | INFO  | image: octavia-amphora-haproxy-2024.2.20260315.qcow2
2026-03-16 01:19:23.562946 | orchestrator | 2026-03-16 01:19:23 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260315.qcow2
2026-03-16 01:19:23.562966 | orchestrator | 2026-03-16 01:19:23 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260315.qcow2.CHECKSUM
2026-03-16 01:19:24.234872 | orchestrator | 2026-03-16 01:19:24 | INFO  | checksum: 728fc6daf4196a5cbdce417dabef1586d4b15af6201c6ad97fcb427a3e856422
2026-03-16 01:19:24.307125 | orchestrator | 2026-03-16 01:19:24 | INFO  | It takes a moment until task 3a1297b4-e282-4d8c-bbbc-10ca6abb2294 (image-manager) has been started and output is visible here.
2026-03-16 01:20:37.565465 | orchestrator | 2026-03-16 01:19:26 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-03-15'
2026-03-16 01:20:37.565566 | orchestrator | 2026-03-16 01:19:27 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260315.qcow2: 200
2026-03-16 01:20:37.565581 | orchestrator | 2026-03-16 01:19:27 | INFO  | Importing image OpenStack Octavia Amphora 2026-03-15
2026-03-16 01:20:37.565591 | orchestrator | 2026-03-16 01:19:27 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260315.qcow2
2026-03-16 01:20:37.565601 | orchestrator | 2026-03-16 01:19:28 | INFO  | Waiting for image to leave queued state...
2026-03-16 01:20:37.565610 | orchestrator | 2026-03-16 01:19:30 | INFO  | Waiting for import to complete...
2026-03-16 01:20:37.565619 | orchestrator | 2026-03-16 01:19:41 | INFO  | Waiting for import to complete...
2026-03-16 01:20:37.565628 | orchestrator | 2026-03-16 01:19:51 | INFO  | Waiting for import to complete...
2026-03-16 01:20:37.565636 | orchestrator | 2026-03-16 01:20:01 | INFO  | Waiting for import to complete...
2026-03-16 01:20:37.565646 | orchestrator | 2026-03-16 01:20:11 | INFO  | Waiting for import to complete...
2026-03-16 01:20:37.565656 | orchestrator | 2026-03-16 01:20:21 | INFO  | Waiting for import to complete...
2026-03-16 01:20:37.565665 | orchestrator | 2026-03-16 01:20:31 | INFO  | Import of 'OpenStack Octavia Amphora 2026-03-15' successfully completed, reloading images
2026-03-16 01:20:37.565674 | orchestrator | 2026-03-16 01:20:32 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-03-15'
2026-03-16 01:20:37.565704 | orchestrator | 2026-03-16 01:20:32 | INFO  | Setting internal_version = 2026-03-15
2026-03-16 01:20:37.565714 | orchestrator | 2026-03-16 01:20:32 | INFO  | Setting image_original_user = ubuntu
2026-03-16 01:20:37.565722 | orchestrator | 2026-03-16 01:20:32 | INFO  | Adding tag amphora
2026-03-16 01:20:37.565731 | orchestrator | 2026-03-16 01:20:32 | INFO  | Adding tag os:ubuntu
2026-03-16 01:20:37.565740 | orchestrator | 2026-03-16 01:20:32 | INFO  | Setting property architecture: x86_64
2026-03-16 01:20:37.565748 | orchestrator | 2026-03-16 01:20:32 | INFO  | Setting property hw_disk_bus: scsi
2026-03-16 01:20:37.565757 | orchestrator | 2026-03-16 01:20:33 | INFO  | Setting property hw_rng_model: virtio
2026-03-16 01:20:37.565765 | orchestrator | 2026-03-16 01:20:33 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-03-16 01:20:37.565774 | orchestrator | 2026-03-16 01:20:33 | INFO  | Setting property hw_watchdog_action: reset
2026-03-16 01:20:37.565782 | orchestrator | 2026-03-16 01:20:33 | INFO  | Setting property hypervisor_type: qemu
2026-03-16 01:20:37.565791 | orchestrator | 2026-03-16 01:20:34 | INFO  | Setting property os_distro: ubuntu
2026-03-16 01:20:37.565813 | orchestrator | 2026-03-16 01:20:34 | INFO  | Setting property replace_frequency: quarterly
2026-03-16 01:20:37.565823 | orchestrator | 2026-03-16 01:20:34 | INFO  | Setting property uuid_validity: last-1
2026-03-16 01:20:37.565831 | orchestrator | 2026-03-16 01:20:34 | INFO  | Setting property provided_until: none
2026-03-16 01:20:37.565840 | orchestrator | 2026-03-16 01:20:35 | INFO  | Setting property os_purpose: network
2026-03-16 01:20:37.565863 | orchestrator | 2026-03-16 01:20:35 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2026-03-16 01:20:37.565872 | orchestrator | 2026-03-16 01:20:35 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2026-03-16 01:20:37.565880 | orchestrator | 2026-03-16 01:20:35 | INFO  | Setting property internal_version: 2026-03-15
2026-03-16 01:20:37.565889 | orchestrator | 2026-03-16 01:20:36 | INFO  | Setting property image_original_user: ubuntu
2026-03-16 01:20:37.565898 | orchestrator | 2026-03-16 01:20:36 | INFO  | Setting property os_version: 2026-03-15
2026-03-16 01:20:37.565907 | orchestrator | 2026-03-16 01:20:36 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260315.qcow2
2026-03-16 01:20:37.565915 | orchestrator | 2026-03-16 01:20:36 | INFO  | Setting property image_build_date: 2026-03-15
2026-03-16 01:20:37.565924 | orchestrator | 2026-03-16 01:20:37 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-03-15'
2026-03-16 01:20:37.565932 | orchestrator | 2026-03-16 01:20:37 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-03-15'
2026-03-16 01:20:37.565956 | orchestrator | 2026-03-16 01:20:37 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2026-03-16 01:20:37.565965 | orchestrator | 2026-03-16 01:20:37 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2026-03-16 01:20:37.565975 | orchestrator | 2026-03-16 01:20:37 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2026-03-16 01:20:37.565983 | orchestrator | 2026-03-16 01:20:37 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2026-03-16 01:20:38.265358 | orchestrator | ok: Runtime: 0:03:18.337632
2026-03-16 01:20:38.279114 |
2026-03-16 01:20:38.279234 | TASK [Run checks]
2026-03-16 01:20:39.034813 | orchestrator | + set -e
2026-03-16 01:20:39.036029 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-16 01:20:39.036100 | orchestrator | ++ export INTERACTIVE=false
2026-03-16 01:20:39.036120 | orchestrator | ++ INTERACTIVE=false
2026-03-16 01:20:39.036130 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-16 01:20:39.036139 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-16 01:20:39.036148 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-03-16 01:20:39.036971 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-03-16 01:20:39.041150 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-16 01:20:39.041221 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-16 01:20:39.041231 | orchestrator | + echo
2026-03-16 01:20:39.041239 | orchestrator |
2026-03-16 01:20:39.041245 | orchestrator | # CHECK
2026-03-16 01:20:39.041251 | orchestrator |
2026-03-16 01:20:39.041263 | orchestrator | + echo '# CHECK'
2026-03-16 01:20:39.041269 | orchestrator | + echo
2026-03-16 01:20:39.041286 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-03-16 01:20:39.042452 | orchestrator | ++ semver 9.5.0 5.0.0
2026-03-16 01:20:39.095878 | orchestrator |
2026-03-16 01:20:39.095968 | orchestrator | ## Containers @ testbed-manager
2026-03-16 01:20:39.095982 | orchestrator |
2026-03-16 01:20:39.095991 | orchestrator | + [[ 1 -eq -1 ]]
2026-03-16 01:20:39.095998 | orchestrator | + echo
2026-03-16 01:20:39.096006 | orchestrator | + echo '## Containers @ testbed-manager'
2026-03-16 01:20:39.096013 | orchestrator | + echo
2026-03-16 01:20:39.096030 | orchestrator | + osism container testbed-manager ps
2026-03-16 01:20:41.121308 | orchestrator | 2026-03-16 01:20:41 | INFO  | Creating empty known_hosts file: /share/known_hosts
2026-03-16 01:20:41.522738 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-03-16 01:20:41.522825 | orchestrator | f7d2ddcb96b7 registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_blackbox_exporter
2026-03-16 01:20:41.522836 | orchestrator | 61f190be56f9 registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_alertmanager
2026-03-16 01:20:41.522840 | orchestrator | 9480114898c8 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor
2026-03-16 01:20:41.522848 | orchestrator | 35168d633017 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter
2026-03-16 01:20:41.522852 | orchestrator | 85edd1cc6aea registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_server
2026-03-16 01:20:41.522859 | orchestrator | 9ec60c6a396c registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 20 minutes ago Up 19 minutes cephclient
2026-03-16 01:20:41.522864 | orchestrator | caa6e3ed96fe registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes cron
2026-03-16 01:20:41.522868 | orchestrator | df41ea63b4b1 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes kolla_toolbox
2026-03-16 01:20:41.522888 | orchestrator | 6d18f648320c registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes fluentd
2026-03-16 01:20:41.522892 | orchestrator | aff18b3b12df phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 33 minutes ago Up 32 minutes (healthy) 80/tcp phpmyadmin
2026-03-16 01:20:41.522896 | orchestrator | ff2f5ee6ed77 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 33 minutes ago Up 33 minutes openstackclient
2026-03-16 01:20:41.522900 | orchestrator | 69aa3ace353b registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 34 minutes ago Up 33 minutes (healthy) 8080/tcp homer
2026-03-16 01:20:41.522904 | orchestrator | 79a14b1c2ffb registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 57 minutes ago Up 56 minutes (healthy) 192.168.16.5:3128->3128/tcp squid
2026-03-16 01:20:41.522911 | orchestrator | ea2bb0c61340 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" About an hour ago Up 40 minutes (healthy) manager-inventory_reconciler-1
2026-03-16 01:20:41.522927 | orchestrator | 258bfe4f4bcc registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" About an hour ago Up 40 minutes (healthy) osism-ansible
2026-03-16 01:20:41.522931 | orchestrator | f5827f81c3da registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" About an hour ago Up 40 minutes (healthy) kolla-ansible
2026-03-16 01:20:41.522935 | orchestrator | fac5440cae17 registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" About an hour ago Up 40 minutes (healthy) ceph-ansible
2026-03-16 01:20:41.522939 | orchestrator | afeb04ae0fb0 registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" About an hour ago Up 40 minutes (healthy) osism-kubernetes
2026-03-16 01:20:41.522943 | orchestrator | c06549be7163 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" About an hour ago Up 41 minutes (healthy) 8000/tcp manager-ara-server-1
2026-03-16 01:20:41.522947 | orchestrator | 07e3b6add8f4 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" About an hour ago Up 41 minutes (healthy) 3306/tcp manager-mariadb-1
2026-03-16 01:20:41.522951 | orchestrator | 75ad62dbd0f6 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" About an hour ago Up 41 minutes (healthy) 6379/tcp manager-redis-1
2026-03-16 01:20:41.522955 | orchestrator | dbfb703bf132 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" About an hour ago Up 41 minutes (healthy) manager-flower-1
2026-03-16 01:20:41.522962 | orchestrator | 03a84d71a0a6 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" About an hour ago Up 41 minutes (healthy) osismclient
2026-03-16 01:20:41.522966 | orchestrator | 449f1260569e registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" About an hour ago Up 41 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2026-03-16 01:20:41.522970 | orchestrator | 8926f730246c registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" About an hour ago Up 41 minutes (healthy) manager-listener-1
2026-03-16 01:20:41.522974 | orchestrator | c29d591f67f0 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" About an hour ago Up 41 minutes (healthy) manager-openstack-1
2026-03-16 01:20:41.522978 | orchestrator | 353ef2037384 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" About an hour ago Up 41 minutes (healthy) manager-beat-1
2026-03-16 01:20:41.522984 | orchestrator | 944b7dfe8ecb registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" About an hour ago Up 41 minutes 192.168.16.5:3000->3000/tcp osism-frontend
2026-03-16 01:20:41.522988 | orchestrator | ac09dd14d725 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" About an hour ago Up About an hour (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2026-03-16 01:20:41.836474 | orchestrator |
2026-03-16 01:20:41.836557 | orchestrator | ## Images @ testbed-manager
2026-03-16 01:20:41.836569 | orchestrator |
2026-03-16 01:20:41.836576 | orchestrator | + echo
2026-03-16 01:20:41.836584 | orchestrator | + echo '## Images @ testbed-manager'
2026-03-16 01:20:41.836589 | orchestrator | + echo
2026-03-16 01:20:41.836593 | orchestrator | + osism container testbed-manager images
2026-03-16 01:20:44.284609 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-03-16 01:20:44.308191 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 9f242ea2af99 21 hours ago 239MB
2026-03-16 01:20:44.308238 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 6 weeks ago 41.4MB
2026-03-16 01:20:44.308243 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 3 months ago 11.5MB
2026-03-16 01:20:44.308248 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20251130.0 0f140ec71e5f 3 months ago 608MB
2026-03-16 01:20:44.308254 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB
2026-03-16 01:20:44.308258 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB
2026-03-16 01:20:44.308262 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB
2026-03-16 01:20:44.308266 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20251130 7bbb4f6f4831 3 months ago 308MB
2026-03-16 01:20:44.308270 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB
2026-03-16 01:20:44.308292 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20251130 ba994ea4acda 3 months ago 404MB
2026-03-16 01:20:44.308296 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20251130 56b43d5c716a 3 months ago 839MB
2026-03-16 01:20:44.308300 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB
2026-03-16 01:20:44.308304 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20251130.0 1bfc1dadeee1 3 months ago 330MB
2026-03-16 01:20:44.308308 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20251130.0 42988b2d229c 3 months ago 613MB
2026-03-16 01:20:44.308311 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20251130.0 a212d8ca4a50 3 months ago 560MB
2026-03-16 01:20:44.308315 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20251130.0 9beff03cb77b 3 months ago 1.23GB
2026-03-16 01:20:44.308319 | orchestrator | registry.osism.tech/osism/osism 0.20251130.1 95213af683ec 3 months ago 383MB
2026-03-16 01:20:44.308342 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20251130.1 2cb6e7609620 3 months ago 238MB
2026-03-16 01:20:44.308346 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 4 months ago 334MB
2026-03-16 01:20:44.308350 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 5 months ago 742MB
2026-03-16 01:20:44.308354 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 6 months ago 275MB
2026-03-16 01:20:44.308357 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 7 months ago 226MB
2026-03-16 01:20:44.308361 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 10 months ago 453MB
2026-03-16 01:20:44.308365 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 21 months ago 146MB
2026-03-16 01:20:44.629675 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-03-16 01:20:44.630681 | orchestrator | ++ semver 9.5.0 5.0.0
2026-03-16 01:20:44.692350 | orchestrator |
2026-03-16 01:20:44.692434 | orchestrator | ## Containers @ testbed-node-0
2026-03-16 01:20:44.692442 | orchestrator |
2026-03-16 01:20:44.692446 | orchestrator | + [[ 1 -eq -1 ]]
2026-03-16 01:20:44.692450 | orchestrator | + echo
2026-03-16 01:20:44.692455 | orchestrator | + echo '## Containers @ testbed-node-0'
2026-03-16 01:20:44.692461 | orchestrator | + echo
2026-03-16 01:20:44.692464 | orchestrator | + osism container testbed-node-0 ps
2026-03-16 01:20:47.201647 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-03-16 01:20:47.201745 | orchestrator | a02ee400167a registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) octavia_worker
2026-03-16 01:20:47.201754 | orchestrator | 675d8294b2f1 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) octavia_housekeeping
2026-03-16 01:20:47.201758 | orchestrator | 893236d34e70 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) octavia_health_manager
2026-03-16 01:20:47.201763 | orchestrator | 7bf033789d1e registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes octavia_driver_agent
2026-03-16 01:20:47.201767 | orchestrator | 0d3ce7367b93 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) octavia_api
2026-03-16 01:20:47.201793 | orchestrator | c7e9580fece1 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy
2026-03-16 01:20:47.201797 | orchestrator | 75d440f7612b registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor
2026-03-16 01:20:47.201801 | orchestrator | 8638e33aec07 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api
2026-03-16 01:20:47.201805 | orchestrator | 6b5a4c112bc8 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_scheduler
2026-03-16 01:20:47.201809 | orchestrator | e8a65c233478 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes grafana
2026-03-16 01:20:47.201813 | orchestrator | e1091117f355 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_backup
2026-03-16 01:20:47.201817 | orchestrator | 774c281e2f4a registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_volume
2026-03-16 01:20:47.201821 | orchestrator | 06c84bfe4622 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler
2026-03-16 01:20:47.201825 | orchestrator | ee6df00bf25f registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api
2026-03-16 01:20:47.201829 | orchestrator | 7b77abf549c8 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api
2026-03-16 01:20:47.201833 | orchestrator | 2825708b7931 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter
2026-03-16 01:20:47.201845 | orchestrator | 83c6de05f00a registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor
2026-03-16 01:20:47.201849 | orchestrator | c65bcb96a424 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter
2026-03-16 01:20:47.201853 | orchestrator | 4f8da3900b36 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter
2026-03-16 01:20:47.201870 | orchestrator | b341c6ca7110 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter
2026-03-16 01:20:47.201875 | orchestrator | a22727651c4d registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) magnum_conductor
2026-03-16 01:20:47.201879 | orchestrator | de5ceac14d34 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) magnum_api
2026-03-16 01:20:47.201882 | orchestrator | d6cc794ec18f registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) neutron_server
2026-03-16 01:20:47.201890 | orchestrator | 7a25622de9b2 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_worker
2026-03-16 01:20:47.201894 | orchestrator | a918ee786270 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_mdns
2026-03-16 01:20:47.201898 | orchestrator | 916a80633f3c registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_producer
2026-03-16 01:20:47.201901 | orchestrator | 39054d0747b6 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_central
2026-03-16 01:20:47.201908 | orchestrator | feb364fdd936 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_api
2026-03-16 01:20:47.201912 | orchestrator | 553bdb79a32c registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_backend_bind9
2026-03-16 01:20:47.201915 | orchestrator | 9013ba784b5a registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_worker
2026-03-16 01:20:47.201919 | orchestrator | ba8e13d67615 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) placement_api
2026-03-16 01:20:47.201923 | orchestrator | fb8d88c60bd6 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_keystone_listener
2026-03-16 01:20:47.201927 | orchestrator | 97d3e18d180a registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) barbican_api
2026-03-16 01:20:47.201931 | orchestrator | 806b38394e18 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 18 minutes ago Up 18 minutes ceph-mgr-testbed-node-0
2026-03-16 01:20:47.201935 | orchestrator | 387d0fdb23a5 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone
2026-03-16 01:20:47.201939 | orchestrator | 6955bae7e265 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) keystone_fernet
2026-03-16 01:20:47.201942 | orchestrator | 4338a5d3f2fa registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) keystone_ssh
2026-03-16 01:20:47.201946 | orchestrator | 86598a840eab registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) horizon
2026-03-16 01:20:47.201950 | orchestrator | 05e7e9749d2d registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" 23 minutes ago Up 23 minutes (healthy) mariadb
2026-03-16 01:20:47.201954 | orchestrator | dede929e9edb registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) opensearch_dashboards
2026-03-16 01:20:47.201962 | orchestrator | 3bda0c1f6ee4 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) opensearch
2026-03-16 01:20:47.201966 | orchestrator | 90db04af1b3e registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 25 minutes ago Up 25 minutes ceph-crash-testbed-node-0
2026-03-16 01:20:47.201976 | orchestrator | 3ce120e53384 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes keepalived
2026-03-16 01:20:47.201980 | orchestrator | bb631bf49016 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) proxysql
2026-03-16 01:20:47.201984 | orchestrator | 10210af7c652 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) haproxy
2026-03-16 01:20:47.201987 | orchestrator | 868d0460981c registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_northd
2026-03-16 01:20:47.201991 | orchestrator | 08a794864b9e registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_sb_db
2026-03-16 01:20:47.201995 | orchestrator | 6eb066a070cb registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_nb_db
2026-03-16 01:20:47.201999 | orchestrator | 09b2eea58edb registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 30 minutes ago Up 30 minutes ceph-mon-testbed-node-0
2026-03-16 01:20:47.202002 | orchestrator | 5d8f3141c916 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes ovn_controller
2026-03-16 01:20:47.202006 | orchestrator | 76f8cc9fe4ad registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) rabbitmq
2026-03-16 01:20:47.202010 | orchestrator | 1629f6d0380f registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) openvswitch_vswitchd
2026-03-16 01:20:47.202042 | orchestrator | 784cae8a1c72 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) redis_sentinel
2026-03-16 01:20:47.202046 | orchestrator | 0caf0e52188c registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) openvswitch_db
2026-03-16 01:20:47.202050 | orchestrator | ce321b2e8da7 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) redis
2026-03-16 01:20:47.202054 | orchestrator | 3c52582a595c registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) memcached
2026-03-16 01:20:47.202061 | orchestrator | 75323abb2bd1 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes cron
2026-03-16 01:20:47.202065 | orchestrator | fee3a805c839 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes kolla_toolbox
2026-03-16 01:20:47.202069 | orchestrator | 8511b41b10ae registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes fluentd
2026-03-16 01:20:47.548439 | orchestrator |
2026-03-16 01:20:47.548510 | orchestrator |
## Images @ testbed-node-0 2026-03-16 01:20:47.548517 | orchestrator | 2026-03-16 01:20:47.548521 | orchestrator | + echo 2026-03-16 01:20:47.548549 | orchestrator | + echo '## Images @ testbed-node-0' 2026-03-16 01:20:47.548554 | orchestrator | + echo 2026-03-16 01:20:47.548558 | orchestrator | + osism container testbed-node-0 images 2026-03-16 01:20:49.969812 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-16 01:20:49.969949 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 3 months ago 322MB 2026-03-16 01:20:49.969979 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 3 months ago 266MB 2026-03-16 01:20:49.970005 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 3 months ago 1.56GB 2026-03-16 01:20:49.970147 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 3 months ago 1.53GB 2026-03-16 01:20:49.970170 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 3 months ago 276MB 2026-03-16 01:20:49.970189 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB 2026-03-16 01:20:49.970207 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB 2026-03-16 01:20:49.970225 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 3 months ago 1.02GB 2026-03-16 01:20:49.970243 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 3 months ago 412MB 2026-03-16 01:20:49.970262 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 3 months ago 274MB 2026-03-16 01:20:49.970282 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB 2026-03-16 01:20:49.970301 | orchestrator | registry.osism.tech/kolla/release/redis 
7.0.15.20251130 5548a8ce5b5c 3 months ago 273MB 2026-03-16 01:20:49.970351 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 3 months ago 273MB 2026-03-16 01:20:49.970371 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 3 months ago 452MB 2026-03-16 01:20:49.970391 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 3 months ago 1.15GB 2026-03-16 01:20:49.970410 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 3 months ago 301MB 2026-03-16 01:20:49.970429 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 3 months ago 298MB 2026-03-16 01:20:49.970445 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB 2026-03-16 01:20:49.970475 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 3 months ago 292MB 2026-03-16 01:20:49.970487 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB 2026-03-16 01:20:49.970498 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 3 months ago 279MB 2026-03-16 01:20:49.970509 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 3 months ago 975MB 2026-03-16 01:20:49.970520 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 3 months ago 279MB 2026-03-16 01:20:49.970531 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 3 months ago 1.37GB 2026-03-16 01:20:49.970542 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 3 months ago 1.21GB 2026-03-16 01:20:49.970582 | orchestrator | 
registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 3 months ago 1.21GB 2026-03-16 01:20:49.970600 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 3 months ago 1.21GB 2026-03-16 01:20:49.970618 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 3 months ago 976MB 2026-03-16 01:20:49.970637 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 3 months ago 976MB 2026-03-16 01:20:49.970655 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 3 months ago 1.13GB 2026-03-16 01:20:49.970673 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 3 months ago 1.24GB 2026-03-16 01:20:49.970720 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 3 months ago 974MB 2026-03-16 01:20:49.970734 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 3 months ago 974MB 2026-03-16 01:20:49.970744 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 3 months ago 974MB 2026-03-16 01:20:49.970755 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 3 months ago 973MB 2026-03-16 01:20:49.970766 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 3 months ago 991MB 2026-03-16 01:20:49.970776 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 3 months ago 991MB 2026-03-16 01:20:49.970787 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 3 months ago 990MB 2026-03-16 01:20:49.970798 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 3 months ago 1.09GB 2026-03-16 01:20:49.970808 | orchestrator | 
registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 3 months ago 1.04GB 2026-03-16 01:20:49.970819 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 3 months ago 1.04GB 2026-03-16 01:20:49.970830 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 3 months ago 1.03GB 2026-03-16 01:20:49.970840 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 3 months ago 1.03GB 2026-03-16 01:20:49.970851 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 3 months ago 1.05GB 2026-03-16 01:20:49.970861 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 3 months ago 1.03GB 2026-03-16 01:20:49.970873 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 3 months ago 1.05GB 2026-03-16 01:20:49.970884 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 3 months ago 1.16GB 2026-03-16 01:20:49.970894 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 3 months ago 1.1GB 2026-03-16 01:20:49.970905 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 3 months ago 983MB 2026-03-16 01:20:49.970915 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 3 months ago 989MB 2026-03-16 01:20:49.970926 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 3 months ago 984MB 2026-03-16 01:20:49.970946 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 3 months ago 984MB 2026-03-16 01:20:49.970957 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 3 months ago 989MB 2026-03-16 01:20:49.970968 | 
orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 3 months ago 984MB 2026-03-16 01:20:49.970979 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 3 months ago 1.05GB 2026-03-16 01:20:49.970989 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 3 months ago 990MB 2026-03-16 01:20:49.971000 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 3 months ago 1.72GB 2026-03-16 01:20:49.971011 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 3 months ago 1.4GB 2026-03-16 01:20:49.971021 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 3 months ago 1.41GB 2026-03-16 01:20:49.971032 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 3 months ago 1.4GB 2026-03-16 01:20:49.971043 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 3 months ago 840MB 2026-03-16 01:20:49.971054 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 3 months ago 840MB 2026-03-16 01:20:49.971065 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 3 months ago 840MB 2026-03-16 01:20:49.971082 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 3 months ago 840MB 2026-03-16 01:20:49.971093 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 10 months ago 1.27GB 2026-03-16 01:20:50.297051 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-16 01:20:50.297138 | orchestrator | ++ semver 9.5.0 5.0.0 2026-03-16 01:20:50.346424 | orchestrator | 2026-03-16 01:20:50.346506 | orchestrator | ## Containers @ testbed-node-1 2026-03-16 01:20:50.346514 | orchestrator | 
2026-03-16 01:20:50.346518 | orchestrator | + [[ 1 -eq -1 ]] 2026-03-16 01:20:50.346523 | orchestrator | + echo 2026-03-16 01:20:50.346527 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-03-16 01:20:50.346533 | orchestrator | + echo 2026-03-16 01:20:50.346537 | orchestrator | + osism container testbed-node-1 ps 2026-03-16 01:20:52.744203 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-03-16 01:20:52.744350 | orchestrator | d5b30c7095f7 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) octavia_worker 2026-03-16 01:20:52.744376 | orchestrator | 00267a81f28a registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) octavia_housekeeping 2026-03-16 01:20:52.744391 | orchestrator | beb90b220b61 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) octavia_health_manager 2026-03-16 01:20:52.744407 | orchestrator | 1f0637483416 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 7 minutes ago Up 6 minutes octavia_driver_agent 2026-03-16 01:20:52.744458 | orchestrator | 984a11cd8053 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) octavia_api 2026-03-16 01:20:52.744499 | orchestrator | 9c918f7278e1 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2026-03-16 01:20:52.744509 | orchestrator | 7c72b5226b00 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor 2026-03-16 01:20:52.744518 | orchestrator | 14eb596cc2d5 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes 
grafana 2026-03-16 01:20:52.744527 | orchestrator | 1a7ce4a15442 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api 2026-03-16 01:20:52.744536 | orchestrator | 3c75e6fd0a86 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_scheduler 2026-03-16 01:20:52.744545 | orchestrator | 7bc976761c2a registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_backup 2026-03-16 01:20:52.744558 | orchestrator | 30f3abaaa866 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_volume 2026-03-16 01:20:52.744567 | orchestrator | 46f48e382a17 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler 2026-03-16 01:20:52.744575 | orchestrator | beeb10069059 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api 2026-03-16 01:20:52.744584 | orchestrator | 72e356be1129 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 2026-03-16 01:20:52.744592 | orchestrator | b4278bde39c6 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter 2026-03-16 01:20:52.744604 | orchestrator | 58a7c5dd09e6 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor 2026-03-16 01:20:52.744613 | orchestrator | 09631fd2b7f1 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes 
prometheus_memcached_exporter 2026-03-16 01:20:52.744622 | orchestrator | e0849a742fcd registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter 2026-03-16 01:20:52.744649 | orchestrator | 32f7a6a00ca3 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter 2026-03-16 01:20:52.744659 | orchestrator | e2af6c7e3149 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) magnum_conductor 2026-03-16 01:20:52.744668 | orchestrator | 9c6e3d502b84 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) magnum_api 2026-03-16 01:20:52.744676 | orchestrator | 684773c34ee5 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) neutron_server 2026-03-16 01:20:52.744692 | orchestrator | bef8ee9312da registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_worker 2026-03-16 01:20:52.744701 | orchestrator | 8f0c95ce1c77 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_mdns 2026-03-16 01:20:52.744710 | orchestrator | 36e2c923c3ba registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_producer 2026-03-16 01:20:52.744725 | orchestrator | e3d35377fd56 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_central 2026-03-16 01:20:52.744734 | orchestrator | 18ffc7e5842d registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init 
--single-…" 17 minutes ago Up 17 minutes (healthy) designate_api 2026-03-16 01:20:52.744743 | orchestrator | 0f6645ca3840 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_backend_bind9 2026-03-16 01:20:52.744752 | orchestrator | 6f210c0ce805 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_worker 2026-03-16 01:20:52.744760 | orchestrator | 941979a85489 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) placement_api 2026-03-16 01:20:52.744769 | orchestrator | 181452fc6f96 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) barbican_keystone_listener 2026-03-16 01:20:52.744777 | orchestrator | 962fcec86605 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) barbican_api 2026-03-16 01:20:52.744786 | orchestrator | 19079a53ef26 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 18 minutes ago Up 18 minutes ceph-mgr-testbed-node-1 2026-03-16 01:20:52.744795 | orchestrator | 8a70dde4b403 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone 2026-03-16 01:20:52.744803 | orchestrator | 59f80843f2d2 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) horizon 2026-03-16 01:20:52.744812 | orchestrator | a8564d539840 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) keystone_fernet 2026-03-16 01:20:52.744821 | orchestrator | dd5957f67829 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 21 
minutes ago Up 21 minutes (healthy) keystone_ssh 2026-03-16 01:20:52.744829 | orchestrator | f3c89751ba59 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch_dashboards 2026-03-16 01:20:52.744838 | orchestrator | c88fed293b5e registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" 24 minutes ago Up 24 minutes (healthy) mariadb 2026-03-16 01:20:52.744855 | orchestrator | 90245ed3b801 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) opensearch 2026-03-16 01:20:52.744870 | orchestrator | 6e97cafc2e2a registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 25 minutes ago Up 25 minutes ceph-crash-testbed-node-1 2026-03-16 01:20:52.744879 | orchestrator | f16de9d71bb6 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes keepalived 2026-03-16 01:20:52.744888 | orchestrator | 2a6d1d657b0e registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) proxysql 2026-03-16 01:20:52.744896 | orchestrator | 4b6feec6907d registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) haproxy 2026-03-16 01:20:52.744905 | orchestrator | 3dceb8e36de9 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_northd 2026-03-16 01:20:52.744914 | orchestrator | f0bcc9168579 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_sb_db 2026-03-16 01:20:52.744922 | orchestrator | 09094e1e9247 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_nb_db 2026-03-16 01:20:52.744931 | orchestrator | 0aad7acb1a59 
registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes ovn_controller 2026-03-16 01:20:52.744940 | orchestrator | 1faaff9dcd0f registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) rabbitmq 2026-03-16 01:20:52.744948 | orchestrator | fb32c1392897 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 30 minutes ago Up 30 minutes ceph-mon-testbed-node-1 2026-03-16 01:20:52.744961 | orchestrator | 29b196f03693 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) openvswitch_vswitchd 2026-03-16 01:20:52.744970 | orchestrator | aa7ad3234b26 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) openvswitch_db 2026-03-16 01:20:52.744979 | orchestrator | c15a0cb35400 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) redis_sentinel 2026-03-16 01:20:52.744988 | orchestrator | 24f41dd38d79 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) redis 2026-03-16 01:20:52.744997 | orchestrator | 2801163b6ab4 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) memcached 2026-03-16 01:20:52.745005 | orchestrator | 8209c235afe7 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes cron 2026-03-16 01:20:52.745014 | orchestrator | cad0d2f2b85e registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes kolla_toolbox 2026-03-16 01:20:52.745023 | orchestrator | c3fcc99495c9 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes 
fluentd 2026-03-16 01:20:53.071770 | orchestrator | 2026-03-16 01:20:53.071837 | orchestrator | ## Images @ testbed-node-1 2026-03-16 01:20:53.071883 | orchestrator | 2026-03-16 01:20:53.071892 | orchestrator | + echo 2026-03-16 01:20:53.071900 | orchestrator | + echo '## Images @ testbed-node-1' 2026-03-16 01:20:53.071909 | orchestrator | + echo 2026-03-16 01:20:53.071917 | orchestrator | + osism container testbed-node-1 images 2026-03-16 01:20:55.516401 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-16 01:20:55.516468 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 3 months ago 322MB 2026-03-16 01:20:55.516474 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 3 months ago 266MB 2026-03-16 01:20:55.516479 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 3 months ago 1.56GB 2026-03-16 01:20:55.516483 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 3 months ago 276MB 2026-03-16 01:20:55.516487 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 3 months ago 1.53GB 2026-03-16 01:20:55.516492 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB 2026-03-16 01:20:55.516495 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB 2026-03-16 01:20:55.516499 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 3 months ago 1.02GB 2026-03-16 01:20:55.516503 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 3 months ago 412MB 2026-03-16 01:20:55.516507 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 3 months ago 274MB 2026-03-16 01:20:55.516510 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 
months ago 578MB 2026-03-16 01:20:55.516514 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 3 months ago 273MB 2026-03-16 01:20:55.516518 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 3 months ago 273MB 2026-03-16 01:20:55.516522 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 3 months ago 452MB 2026-03-16 01:20:55.516526 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 3 months ago 1.15GB 2026-03-16 01:20:55.516530 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 3 months ago 301MB 2026-03-16 01:20:55.516534 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 3 months ago 298MB 2026-03-16 01:20:55.516538 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB 2026-03-16 01:20:55.516541 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 3 months ago 292MB 2026-03-16 01:20:55.516545 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB 2026-03-16 01:20:55.516549 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 3 months ago 279MB 2026-03-16 01:20:55.516553 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 3 months ago 279MB 2026-03-16 01:20:55.516556 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 3 months ago 975MB 2026-03-16 01:20:55.516560 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 3 months ago 1.37GB 2026-03-16 01:20:55.516580 | orchestrator | 
registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 3 months ago 1.21GB 2026-03-16 01:20:55.516584 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 3 months ago 1.21GB 2026-03-16 01:20:55.516587 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 3 months ago 1.21GB 2026-03-16 01:20:55.516591 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 3 months ago 1.13GB 2026-03-16 01:20:55.516595 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 3 months ago 1.24GB 2026-03-16 01:20:55.516611 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 3 months ago 991MB 2026-03-16 01:20:55.516615 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 3 months ago 991MB 2026-03-16 01:20:55.516629 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 3 months ago 990MB 2026-03-16 01:20:55.516633 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 3 months ago 1.09GB 2026-03-16 01:20:55.516637 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 3 months ago 1.04GB 2026-03-16 01:20:55.516655 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 3 months ago 1.04GB 2026-03-16 01:20:55.516659 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 3 months ago 1.03GB 2026-03-16 01:20:55.516663 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 3 months ago 1.03GB 2026-03-16 01:20:55.516666 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 3 months ago 1.05GB 2026-03-16 01:20:55.516670 | orchestrator | 
registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 3 months ago 1.03GB 2026-03-16 01:20:55.516674 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 3 months ago 1.05GB 2026-03-16 01:20:55.516678 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 3 months ago 1.16GB 2026-03-16 01:20:55.516681 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 3 months ago 1.1GB 2026-03-16 01:20:55.516685 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 3 months ago 983MB 2026-03-16 01:20:55.516692 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 3 months ago 989MB 2026-03-16 01:20:55.516696 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 3 months ago 984MB 2026-03-16 01:20:55.516700 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 3 months ago 984MB 2026-03-16 01:20:55.516703 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 3 months ago 989MB 2026-03-16 01:20:55.516707 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 3 months ago 984MB 2026-03-16 01:20:55.516711 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 3 months ago 1.72GB 2026-03-16 01:20:55.516715 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 3 months ago 1.4GB 2026-03-16 01:20:55.516732 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 3 months ago 1.41GB 2026-03-16 01:20:55.516735 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 3 months ago 1.4GB 2026-03-16 01:20:55.516739 | orchestrator | 
registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 3 months ago 840MB 2026-03-16 01:20:55.516743 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 3 months ago 840MB 2026-03-16 01:20:55.516747 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 3 months ago 840MB 2026-03-16 01:20:55.516751 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 3 months ago 840MB 2026-03-16 01:20:55.516755 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 10 months ago 1.27GB 2026-03-16 01:20:55.866538 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-16 01:20:55.867389 | orchestrator | ++ semver 9.5.0 5.0.0 2026-03-16 01:20:55.922570 | orchestrator | 2026-03-16 01:20:55.922685 | orchestrator | ## Containers @ testbed-node-2 2026-03-16 01:20:55.922713 | orchestrator | 2026-03-16 01:20:55.922734 | orchestrator | + [[ 1 -eq -1 ]] 2026-03-16 01:20:55.922754 | orchestrator | + echo 2026-03-16 01:20:55.922774 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-03-16 01:20:55.922790 | orchestrator | + echo 2026-03-16 01:20:55.922801 | orchestrator | + osism container testbed-node-2 ps 2026-03-16 01:20:58.389858 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-03-16 01:20:58.389951 | orchestrator | 286164b9c0e5 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) octavia_worker 2026-03-16 01:20:58.389967 | orchestrator | 2fcf19ed4dd6 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) octavia_housekeeping 2026-03-16 01:20:58.389979 | orchestrator | bbbf5f46e88f registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 6 minutes ago Up 6 
minutes (healthy) octavia_health_manager 2026-03-16 01:20:58.389989 | orchestrator | 7998e329d53f registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 7 minutes ago Up 7 minutes octavia_driver_agent 2026-03-16 01:20:58.389999 | orchestrator | 1237521ea2ac registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) octavia_api 2026-03-16 01:20:58.390061 | orchestrator | 301a683c534b registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2026-03-16 01:20:58.390083 | orchestrator | ba1f9fd6d0eb registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor 2026-03-16 01:20:58.390097 | orchestrator | 4eecf2409b15 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api 2026-03-16 01:20:58.390112 | orchestrator | 57d2d9df290e registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes grafana 2026-03-16 01:20:58.390127 | orchestrator | 647283b6d2e6 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_scheduler 2026-03-16 01:20:58.390162 | orchestrator | 6e8487b4d927 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_backup 2026-03-16 01:20:58.390191 | orchestrator | 89748c4f9e97 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_volume 2026-03-16 01:20:58.390200 | orchestrator | 8b48db2561e0 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler 2026-03-16 
01:20:58.390211 | orchestrator | 7e3297268b99 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api 2026-03-16 01:20:58.390221 | orchestrator | 89693f46bff3 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 2026-03-16 01:20:58.390231 | orchestrator | 0ee91b21bf34 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter 2026-03-16 01:20:58.390244 | orchestrator | 6bdb538347ba registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor 2026-03-16 01:20:58.390256 | orchestrator | ca5fc48fdba9 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter 2026-03-16 01:20:58.390267 | orchestrator | f8d86a59dfa9 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter 2026-03-16 01:20:58.390320 | orchestrator | 8563dd895527 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter 2026-03-16 01:20:58.390333 | orchestrator | 5b8e8d55bcc1 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 15 minutes (healthy) magnum_conductor 2026-03-16 01:20:58.390342 | orchestrator | 9305f8ba8c1f registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) magnum_api 2026-03-16 01:20:58.390348 | orchestrator | 750c36b1ac74 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 16 minutes ago Up 16 
minutes (healthy) neutron_server 2026-03-16 01:20:58.390355 | orchestrator | 21202d5bf75a registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_worker 2026-03-16 01:20:58.390361 | orchestrator | 52b12d102155 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_mdns 2026-03-16 01:20:58.390367 | orchestrator | 0083230743f3 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_producer 2026-03-16 01:20:58.390392 | orchestrator | 882af68e0c64 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_central 2026-03-16 01:20:58.390398 | orchestrator | 59a4a8e5dcac registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_api 2026-03-16 01:20:58.390404 | orchestrator | 9bfab61717dc registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_backend_bind9 2026-03-16 01:20:58.390417 | orchestrator | 6f23020af7dd registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_worker 2026-03-16 01:20:58.390424 | orchestrator | 3b4d263e6d23 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) placement_api 2026-03-16 01:20:58.390431 | orchestrator | 08075746dfc9 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) barbican_keystone_listener 2026-03-16 01:20:58.390438 | orchestrator | 32b6f6481aa8 registry.osism.tech/osism/ceph-daemon:18.2.7 
"/usr/bin/ceph-mgr -…" 18 minutes ago Up 18 minutes ceph-mgr-testbed-node-2 2026-03-16 01:20:58.390446 | orchestrator | cd164807be3e registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) barbican_api 2026-03-16 01:20:58.390453 | orchestrator | abc7bac28548 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) keystone 2026-03-16 01:20:58.390460 | orchestrator | c011a33eeb71 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) horizon 2026-03-16 01:20:58.390468 | orchestrator | b1d93388cc2b registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) keystone_fernet 2026-03-16 01:20:58.390475 | orchestrator | be59027765c4 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) keystone_ssh 2026-03-16 01:20:58.390483 | orchestrator | 74874740aaa2 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch_dashboards 2026-03-16 01:20:58.390490 | orchestrator | a8161c136e52 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" 23 minutes ago Up 23 minutes (healthy) mariadb 2026-03-16 01:20:58.390504 | orchestrator | 40bba53a3fd8 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) opensearch 2026-03-16 01:20:58.390511 | orchestrator | 2953807916b1 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 25 minutes ago Up 25 minutes ceph-crash-testbed-node-2 2026-03-16 01:20:58.390519 | orchestrator | b52c936a512b registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes keepalived 
2026-03-16 01:20:58.390526 | orchestrator | 0489ca58044a registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) proxysql 2026-03-16 01:20:58.390533 | orchestrator | 6dcdce7b329e registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) haproxy 2026-03-16 01:20:58.390540 | orchestrator | d1aae3e8a5ce registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_northd 2026-03-16 01:20:58.390576 | orchestrator | b00777fbbf26 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_sb_db 2026-03-16 01:20:58.390590 | orchestrator | a35038308066 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_nb_db 2026-03-16 01:20:58.390598 | orchestrator | d3e9f7c7afa7 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) rabbitmq 2026-03-16 01:20:58.390605 | orchestrator | c77e7ea4a4f7 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes ovn_controller 2026-03-16 01:20:58.390612 | orchestrator | cdfddb202929 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 30 minutes ago Up 30 minutes ceph-mon-testbed-node-2 2026-03-16 01:20:58.390623 | orchestrator | d4790abeabed registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) openvswitch_vswitchd 2026-03-16 01:20:58.390630 | orchestrator | 16f3a56bf7de registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) openvswitch_db 2026-03-16 01:20:58.390638 | orchestrator | 9f23311b2d3b 
registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) redis_sentinel 2026-03-16 01:20:58.390645 | orchestrator | ed6b9e5d582d registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) redis 2026-03-16 01:20:58.390652 | orchestrator | fbafec827c3c registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) memcached 2026-03-16 01:20:58.390658 | orchestrator | 32354eaeac70 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes cron 2026-03-16 01:20:58.390664 | orchestrator | 572291cd1f50 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes kolla_toolbox 2026-03-16 01:20:58.390670 | orchestrator | 378a94ff3631 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes fluentd 2026-03-16 01:20:58.727535 | orchestrator | 2026-03-16 01:20:58.727601 | orchestrator | ## Images @ testbed-node-2 2026-03-16 01:20:58.727610 | orchestrator | 2026-03-16 01:20:58.727615 | orchestrator | + echo 2026-03-16 01:20:58.727621 | orchestrator | + echo '## Images @ testbed-node-2' 2026-03-16 01:20:58.727626 | orchestrator | + echo 2026-03-16 01:20:58.727632 | orchestrator | + osism container testbed-node-2 images 2026-03-16 01:21:01.238486 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-16 01:21:01.238573 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 3 months ago 322MB 2026-03-16 01:21:01.238585 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 3 months ago 266MB 2026-03-16 01:21:01.238593 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 3 months ago 1.56GB 2026-03-16 01:21:01.238601 | orchestrator | 
registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 3 months ago 1.53GB 2026-03-16 01:21:01.238609 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 3 months ago 276MB 2026-03-16 01:21:01.238616 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB 2026-03-16 01:21:01.238644 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB 2026-03-16 01:21:01.238652 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 3 months ago 1.02GB 2026-03-16 01:21:01.238659 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 3 months ago 412MB 2026-03-16 01:21:01.238666 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 3 months ago 274MB 2026-03-16 01:21:01.238673 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB 2026-03-16 01:21:01.238681 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 3 months ago 273MB 2026-03-16 01:21:01.238688 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 3 months ago 273MB 2026-03-16 01:21:01.238695 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 3 months ago 452MB 2026-03-16 01:21:01.238702 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 3 months ago 1.15GB 2026-03-16 01:21:01.238710 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 3 months ago 301MB 2026-03-16 01:21:01.238717 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 3 months ago 298MB 2026-03-16 01:21:01.238724 | orchestrator | 
registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB 2026-03-16 01:21:01.238731 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 3 months ago 292MB 2026-03-16 01:21:01.238739 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB 2026-03-16 01:21:01.238746 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 3 months ago 279MB 2026-03-16 01:21:01.238753 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 3 months ago 279MB 2026-03-16 01:21:01.238761 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 3 months ago 975MB 2026-03-16 01:21:01.238768 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 3 months ago 1.37GB 2026-03-16 01:21:01.238775 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 3 months ago 1.21GB 2026-03-16 01:21:01.238782 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 3 months ago 1.21GB 2026-03-16 01:21:01.238790 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 3 months ago 1.21GB 2026-03-16 01:21:01.238797 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 3 months ago 1.13GB 2026-03-16 01:21:01.238804 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 3 months ago 1.24GB 2026-03-16 01:21:01.238812 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 3 months ago 991MB 2026-03-16 01:21:01.238819 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 3 months ago 991MB 2026-03-16 
01:21:01.238839 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 3 months ago 990MB 2026-03-16 01:21:01.238853 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 3 months ago 1.09GB 2026-03-16 01:21:01.238860 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 3 months ago 1.04GB 2026-03-16 01:21:01.238867 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 3 months ago 1.04GB 2026-03-16 01:21:01.238875 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 3 months ago 1.03GB 2026-03-16 01:21:01.238882 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 3 months ago 1.03GB 2026-03-16 01:21:01.238889 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 3 months ago 1.05GB 2026-03-16 01:21:01.238896 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 3 months ago 1.03GB 2026-03-16 01:21:01.238903 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 3 months ago 1.05GB 2026-03-16 01:21:01.238924 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 3 months ago 1.16GB 2026-03-16 01:21:01.238932 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 3 months ago 1.1GB 2026-03-16 01:21:01.238940 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 3 months ago 983MB 2026-03-16 01:21:01.238947 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 3 months ago 989MB 2026-03-16 01:21:01.238954 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 3 months ago 984MB 2026-03-16 
01:21:01.238961 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 3 months ago 984MB 2026-03-16 01:21:01.238969 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 3 months ago 989MB 2026-03-16 01:21:01.238976 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 3 months ago 984MB 2026-03-16 01:21:01.238983 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 3 months ago 1.72GB 2026-03-16 01:21:01.238991 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 3 months ago 1.4GB 2026-03-16 01:21:01.238998 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 3 months ago 1.41GB 2026-03-16 01:21:01.239008 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 3 months ago 1.4GB 2026-03-16 01:21:01.239016 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 3 months ago 840MB 2026-03-16 01:21:01.239024 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 3 months ago 840MB 2026-03-16 01:21:01.239032 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 3 months ago 840MB 2026-03-16 01:21:01.239051 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 3 months ago 840MB 2026-03-16 01:21:01.239060 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 10 months ago 1.27GB 2026-03-16 01:21:01.574068 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-03-16 01:21:01.584652 | orchestrator | + set -e 2026-03-16 01:21:01.584770 | orchestrator | + source /opt/manager-vars.sh 2026-03-16 01:21:01.586140 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-16 01:21:01.586181 | 
orchestrator | ++ NUMBER_OF_NODES=6 2026-03-16 01:21:01.586190 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-16 01:21:01.586197 | orchestrator | ++ CEPH_VERSION=reef 2026-03-16 01:21:01.586214 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-16 01:21:01.586228 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-16 01:21:01.586247 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-16 01:21:01.586261 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-16 01:21:01.586273 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-16 01:21:01.586340 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-16 01:21:01.586354 | orchestrator | ++ export ARA=false 2026-03-16 01:21:01.586367 | orchestrator | ++ ARA=false 2026-03-16 01:21:01.586422 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-16 01:21:01.586436 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-16 01:21:01.586448 | orchestrator | ++ export TEMPEST=true 2026-03-16 01:21:01.586461 | orchestrator | ++ TEMPEST=true 2026-03-16 01:21:01.586470 | orchestrator | ++ export IS_ZUUL=true 2026-03-16 01:21:01.586477 | orchestrator | ++ IS_ZUUL=true 2026-03-16 01:21:01.586485 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.83 2026-03-16 01:21:01.586492 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.83 2026-03-16 01:21:01.586500 | orchestrator | ++ export EXTERNAL_API=false 2026-03-16 01:21:01.586507 | orchestrator | ++ EXTERNAL_API=false 2026-03-16 01:21:01.586514 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-16 01:21:01.586522 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-16 01:21:01.586596 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-16 01:21:01.586606 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-16 01:21:01.586613 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-16 01:21:01.586620 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-16 01:21:01.586628 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-16 
01:21:01.586635 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2026-03-16 01:21:01.594233 | orchestrator | + set -e 2026-03-16 01:21:01.594332 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-16 01:21:01.594342 | orchestrator | ++ export INTERACTIVE=false 2026-03-16 01:21:01.594350 | orchestrator | ++ INTERACTIVE=false 2026-03-16 01:21:01.594358 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-16 01:21:01.594364 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-16 01:21:01.594372 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-03-16 01:21:01.595013 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-03-16 01:21:01.601835 | orchestrator | 2026-03-16 01:21:01.601910 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-16 01:21:01.601921 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-16 01:21:01.601929 | orchestrator | + echo 2026-03-16 01:21:01.601937 | orchestrator | + echo '# Ceph status' 2026-03-16 01:21:01.601945 | orchestrator | # Ceph status 2026-03-16 01:21:01.601952 | orchestrator | 2026-03-16 01:21:01.601960 | orchestrator | + echo 2026-03-16 01:21:01.601967 | orchestrator | + ceph -s 2026-03-16 01:21:02.197250 | orchestrator | cluster: 2026-03-16 01:21:02.197342 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2026-03-16 01:21:02.197350 | orchestrator | health: HEALTH_OK 2026-03-16 01:21:02.197355 | orchestrator | 2026-03-16 01:21:02.197360 | orchestrator | services: 2026-03-16 01:21:02.197364 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 30m) 2026-03-16 01:21:02.197381 | orchestrator | mgr: testbed-node-1(active, since 17m), standbys: testbed-node-2, testbed-node-0 2026-03-16 01:21:02.197389 | orchestrator | mds: 1/1 daemons up, 2 standby 2026-03-16 01:21:02.197396 | orchestrator | osd: 6 osds: 6 up (since 26m), 6 in (since 27m) 
2026-03-16 01:21:02.197402 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2026-03-16 01:21:02.197408 | orchestrator | 2026-03-16 01:21:02.197414 | orchestrator | data: 2026-03-16 01:21:02.197421 | orchestrator | volumes: 1/1 healthy 2026-03-16 01:21:02.197427 | orchestrator | pools: 14 pools, 401 pgs 2026-03-16 01:21:02.197433 | orchestrator | objects: 552 objects, 2.2 GiB 2026-03-16 01:21:02.197439 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2026-03-16 01:21:02.197446 | orchestrator | pgs: 401 active+clean 2026-03-16 01:21:02.197452 | orchestrator | 2026-03-16 01:21:02.255431 | orchestrator | 2026-03-16 01:21:02.255509 | orchestrator | # Ceph versions 2026-03-16 01:21:02.255519 | orchestrator | 2026-03-16 01:21:02.255526 | orchestrator | + echo 2026-03-16 01:21:02.255533 | orchestrator | + echo '# Ceph versions' 2026-03-16 01:21:02.255542 | orchestrator | + echo 2026-03-16 01:21:02.255572 | orchestrator | + ceph versions 2026-03-16 01:21:02.851363 | orchestrator | { 2026-03-16 01:21:02.851438 | orchestrator | "mon": { 2026-03-16 01:21:02.851447 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-03-16 01:21:02.851454 | orchestrator | }, 2026-03-16 01:21:02.851460 | orchestrator | "mgr": { 2026-03-16 01:21:02.851465 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-03-16 01:21:02.851471 | orchestrator | }, 2026-03-16 01:21:02.851476 | orchestrator | "osd": { 2026-03-16 01:21:02.851482 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2026-03-16 01:21:02.851487 | orchestrator | }, 2026-03-16 01:21:02.851492 | orchestrator | "mds": { 2026-03-16 01:21:02.851497 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-03-16 01:21:02.851503 | orchestrator | }, 2026-03-16 01:21:02.851508 | orchestrator | "rgw": { 2026-03-16 01:21:02.851513 | 
orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-03-16 01:21:02.851518 | orchestrator | }, 2026-03-16 01:21:02.851524 | orchestrator | "overall": { 2026-03-16 01:21:02.851530 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2026-03-16 01:21:02.851535 | orchestrator | } 2026-03-16 01:21:02.851541 | orchestrator | } 2026-03-16 01:21:02.898326 | orchestrator | 2026-03-16 01:21:02.898424 | orchestrator | # Ceph OSD tree 2026-03-16 01:21:02.898435 | orchestrator | 2026-03-16 01:21:02.898444 | orchestrator | + echo 2026-03-16 01:21:02.898453 | orchestrator | + echo '# Ceph OSD tree' 2026-03-16 01:21:02.898462 | orchestrator | + echo 2026-03-16 01:21:02.898469 | orchestrator | + ceph osd df tree 2026-03-16 01:21:03.426551 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-03-16 01:21:03.426623 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default 2026-03-16 01:21:03.426630 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3 2026-03-16 01:21:03.426635 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 70 MiB 19 GiB 5.40 0.91 189 up osd.0 2026-03-16 01:21:03.426640 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 74 MiB 19 GiB 6.43 1.09 201 up osd.3 2026-03-16 01:21:03.426645 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2026-03-16 01:21:03.426650 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 74 MiB 19 GiB 6.00 1.01 192 up osd.1 2026-03-16 01:21:03.426655 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 70 MiB 19 GiB 5.83 0.99 196 up osd.4 2026-03-16 01:21:03.426660 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 
2026-03-16 01:21:03.426665 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 70 MiB 19 GiB 6.53 1.10 205 up osd.2
2026-03-16 01:21:03.426670 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1011 MiB 1 KiB 74 MiB 19 GiB 5.30 0.90 187 up osd.5
2026-03-16 01:21:03.426675 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92
2026-03-16 01:21:03.426680 | orchestrator | MIN/MAX VAR: 0.90/1.10 STDDEV: 0.47
2026-03-16 01:21:03.468753 | orchestrator |
2026-03-16 01:21:03.468824 | orchestrator | # Ceph monitor status
2026-03-16 01:21:03.468833 | orchestrator |
2026-03-16 01:21:03.468839 | orchestrator | + echo
2026-03-16 01:21:03.468846 | orchestrator | + echo '# Ceph monitor status'
2026-03-16 01:21:03.468853 | orchestrator | + echo
2026-03-16 01:21:03.468859 | orchestrator | + ceph mon stat
2026-03-16 01:21:04.056937 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 4, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2
2026-03-16 01:21:04.100610 | orchestrator |
2026-03-16 01:21:04.100691 | orchestrator | # Ceph quorum status
2026-03-16 01:21:04.100708 | orchestrator |
2026-03-16 01:21:04.100718 | orchestrator | + echo
2026-03-16 01:21:04.100730 | orchestrator | + echo '# Ceph quorum status'
2026-03-16 01:21:04.100739 | orchestrator | + echo
2026-03-16 01:21:04.100792 | orchestrator | + ceph quorum_status
2026-03-16 01:21:04.100917 | orchestrator | + jq
2026-03-16 01:21:04.752759 | orchestrator | {
2026-03-16 01:21:04.752860 | orchestrator |   "election_epoch": 4,
2026-03-16 01:21:04.752870 | orchestrator |   "quorum": [
2026-03-16 01:21:04.752875 | orchestrator |     0,
2026-03-16 01:21:04.752880 | orchestrator |     1,
2026-03-16 01:21:04.752884 | orchestrator |     2
2026-03-16 01:21:04.752888 | orchestrator |   ],
2026-03-16 01:21:04.752892 | orchestrator |   "quorum_names": [
2026-03-16 01:21:04.752896 | orchestrator |     "testbed-node-0",
2026-03-16 01:21:04.752900 | orchestrator |     "testbed-node-1",
2026-03-16 01:21:04.752905 | orchestrator |     "testbed-node-2"
2026-03-16 01:21:04.752908 | orchestrator |   ],
2026-03-16 01:21:04.752913 | orchestrator |   "quorum_leader_name": "testbed-node-0",
2026-03-16 01:21:04.752917 | orchestrator |   "quorum_age": 1833,
2026-03-16 01:21:04.752922 | orchestrator |   "features": {
2026-03-16 01:21:04.752926 | orchestrator |     "quorum_con": "4540138322906710015",
2026-03-16 01:21:04.752930 | orchestrator |     "quorum_mon": [
2026-03-16 01:21:04.752934 | orchestrator |       "kraken",
2026-03-16 01:21:04.752938 | orchestrator |       "luminous",
2026-03-16 01:21:04.752942 | orchestrator |       "mimic",
2026-03-16 01:21:04.752946 | orchestrator |       "osdmap-prune",
2026-03-16 01:21:04.752950 | orchestrator |       "nautilus",
2026-03-16 01:21:04.752954 | orchestrator |       "octopus",
2026-03-16 01:21:04.752958 | orchestrator |       "pacific",
2026-03-16 01:21:04.752962 | orchestrator |       "elector-pinging",
2026-03-16 01:21:04.752966 | orchestrator |       "quincy",
2026-03-16 01:21:04.752970 | orchestrator |       "reef"
2026-03-16 01:21:04.752974 | orchestrator |     ]
2026-03-16 01:21:04.752978 | orchestrator |   },
2026-03-16 01:21:04.752982 | orchestrator |   "monmap": {
2026-03-16 01:21:04.752986 | orchestrator |     "epoch": 1,
2026-03-16 01:21:04.752990 | orchestrator |     "fsid": "11111111-1111-1111-1111-111111111111",
2026-03-16 01:21:04.752994 | orchestrator |     "modified": "2026-03-16T00:50:16.914064Z",
2026-03-16 01:21:04.752999 | orchestrator |     "created": "2026-03-16T00:50:16.914064Z",
2026-03-16 01:21:04.753003 | orchestrator |     "min_mon_release": 18,
2026-03-16 01:21:04.753006 | orchestrator |     "min_mon_release_name": "reef",
2026-03-16 01:21:04.753010 | orchestrator |     "election_strategy": 1,
2026-03-16 01:21:04.753014 | orchestrator |     "disallowed_leaders: ": "",
2026-03-16 01:21:04.753018 | orchestrator |     "stretch_mode": false,
2026-03-16 01:21:04.753029 | orchestrator |     "tiebreaker_mon": "",
2026-03-16 01:21:04.753033 | orchestrator |     "removed_ranks: ": "",
2026-03-16 01:21:04.753038 | orchestrator |     "features": {
2026-03-16 01:21:04.753045 | orchestrator |       "persistent": [
2026-03-16 01:21:04.753051 | orchestrator |         "kraken",
2026-03-16 01:21:04.753061 | orchestrator |         "luminous",
2026-03-16 01:21:04.753068 | orchestrator |         "mimic",
2026-03-16 01:21:04.753075 | orchestrator |         "osdmap-prune",
2026-03-16 01:21:04.753089 | orchestrator |         "nautilus",
2026-03-16 01:21:04.753095 | orchestrator |         "octopus",
2026-03-16 01:21:04.753101 | orchestrator |         "pacific",
2026-03-16 01:21:04.753108 | orchestrator |         "elector-pinging",
2026-03-16 01:21:04.753114 | orchestrator |         "quincy",
2026-03-16 01:21:04.753120 | orchestrator |         "reef"
2026-03-16 01:21:04.753126 | orchestrator |       ],
2026-03-16 01:21:04.753133 | orchestrator |       "optional": []
2026-03-16 01:21:04.753138 | orchestrator |     },
2026-03-16 01:21:04.753145 | orchestrator |     "mons": [
2026-03-16 01:21:04.753151 | orchestrator |       {
2026-03-16 01:21:04.753157 | orchestrator |         "rank": 0,
2026-03-16 01:21:04.753164 | orchestrator |         "name": "testbed-node-0",
2026-03-16 01:21:04.753171 | orchestrator |         "public_addrs": {
2026-03-16 01:21:04.753175 | orchestrator |           "addrvec": [
2026-03-16 01:21:04.753179 | orchestrator |             {
2026-03-16 01:21:04.753185 | orchestrator |               "type": "v2",
2026-03-16 01:21:04.753193 | orchestrator |               "addr": "192.168.16.10:3300",
2026-03-16 01:21:04.753202 | orchestrator |               "nonce": 0
2026-03-16 01:21:04.753209 | orchestrator |             },
2026-03-16 01:21:04.753215 | orchestrator |             {
2026-03-16 01:21:04.753220 | orchestrator |               "type": "v1",
2026-03-16 01:21:04.753226 | orchestrator |               "addr": "192.168.16.10:6789",
2026-03-16 01:21:04.753233 | orchestrator |               "nonce": 0
2026-03-16 01:21:04.753239 | orchestrator |             }
2026-03-16 01:21:04.753269 | orchestrator |           ]
2026-03-16 01:21:04.753276 | orchestrator |         },
2026-03-16 01:21:04.753353 | orchestrator |         "addr": "192.168.16.10:6789/0",
2026-03-16 01:21:04.753358 | orchestrator |         "public_addr": "192.168.16.10:6789/0",
2026-03-16 01:21:04.753363 | orchestrator |         "priority": 0,
2026-03-16 01:21:04.753387 | orchestrator |         "weight": 0,
2026-03-16 01:21:04.753392 | orchestrator |         "crush_location": "{}"
2026-03-16 01:21:04.753397 | orchestrator |       },
2026-03-16 01:21:04.753401 | orchestrator |       {
2026-03-16 01:21:04.753405 | orchestrator |         "rank": 1,
2026-03-16 01:21:04.753409 | orchestrator |         "name": "testbed-node-1",
2026-03-16 01:21:04.753413 | orchestrator |         "public_addrs": {
2026-03-16 01:21:04.753417 | orchestrator |           "addrvec": [
2026-03-16 01:21:04.753421 | orchestrator |             {
2026-03-16 01:21:04.753426 | orchestrator |               "type": "v2",
2026-03-16 01:21:04.753430 | orchestrator |               "addr": "192.168.16.11:3300",
2026-03-16 01:21:04.753434 | orchestrator |               "nonce": 0
2026-03-16 01:21:04.753438 | orchestrator |             },
2026-03-16 01:21:04.753442 | orchestrator |             {
2026-03-16 01:21:04.753446 | orchestrator |               "type": "v1",
2026-03-16 01:21:04.753450 | orchestrator |               "addr": "192.168.16.11:6789",
2026-03-16 01:21:04.753455 | orchestrator |               "nonce": 0
2026-03-16 01:21:04.753459 | orchestrator |             }
2026-03-16 01:21:04.753463 | orchestrator |           ]
2026-03-16 01:21:04.753467 | orchestrator |         },
2026-03-16 01:21:04.753471 | orchestrator |         "addr": "192.168.16.11:6789/0",
2026-03-16 01:21:04.753475 | orchestrator |         "public_addr": "192.168.16.11:6789/0",
2026-03-16 01:21:04.753479 | orchestrator |         "priority": 0,
2026-03-16 01:21:04.753483 | orchestrator |         "weight": 0,
2026-03-16 01:21:04.753488 | orchestrator |         "crush_location": "{}"
2026-03-16 01:21:04.753492 | orchestrator |       },
2026-03-16 01:21:04.753496 | orchestrator |       {
2026-03-16 01:21:04.753500 | orchestrator |         "rank": 2,
2026-03-16 01:21:04.753504 | orchestrator |         "name": "testbed-node-2",
2026-03-16 01:21:04.753508 | orchestrator |         "public_addrs": {
2026-03-16 01:21:04.753512 | orchestrator |           "addrvec": [
2026-03-16 01:21:04.753516 | orchestrator |             {
2026-03-16 01:21:04.753520 | orchestrator |               "type": "v2",
2026-03-16 01:21:04.753524 | orchestrator |               "addr": "192.168.16.12:3300",
2026-03-16 01:21:04.753528 | orchestrator |               "nonce": 0
2026-03-16 01:21:04.753533 | orchestrator |             },
2026-03-16 01:21:04.753537 | orchestrator |             {
2026-03-16 01:21:04.753541 | orchestrator |               "type": "v1",
2026-03-16 01:21:04.753548 | orchestrator |               "addr": "192.168.16.12:6789",
2026-03-16 01:21:04.753552 | orchestrator |               "nonce": 0
2026-03-16 01:21:04.753556 | orchestrator |             }
2026-03-16 01:21:04.753560 | orchestrator |           ]
2026-03-16 01:21:04.753564 | orchestrator |         },
2026-03-16 01:21:04.753568 | orchestrator |         "addr": "192.168.16.12:6789/0",
2026-03-16 01:21:04.753572 | orchestrator |         "public_addr": "192.168.16.12:6789/0",
2026-03-16 01:21:04.753577 | orchestrator |         "priority": 0,
2026-03-16 01:21:04.753581 | orchestrator |         "weight": 0,
2026-03-16 01:21:04.753585 | orchestrator |         "crush_location": "{}"
2026-03-16 01:21:04.753589 | orchestrator |       }
2026-03-16 01:21:04.753593 | orchestrator |     ]
2026-03-16 01:21:04.753597 | orchestrator |   }
2026-03-16 01:21:04.753602 | orchestrator | }
2026-03-16 01:21:04.753690 | orchestrator |
2026-03-16 01:21:04.753696 | orchestrator | # Ceph free space status
2026-03-16 01:21:04.753700 | orchestrator |
2026-03-16 01:21:04.753704 | orchestrator | + echo
2026-03-16 01:21:04.753708 | orchestrator | + echo '# Ceph free space status'
2026-03-16 01:21:04.753712 | orchestrator | + echo
2026-03-16 01:21:04.753716 | orchestrator | + ceph df
2026-03-16 01:21:05.352531 | orchestrator | --- RAW STORAGE ---
2026-03-16 01:21:05.352602 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED
2026-03-16 01:21:05.352620 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92
2026-03-16 01:21:05.352624 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92
2026-03-16 01:21:05.352628 | orchestrator |
2026-03-16 01:21:05.352633 | orchestrator | --- POOLS ---
2026-03-16 01:21:05.352638 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
2026-03-16 01:21:05.352644 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB
2026-03-16 01:21:05.352648 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB
2026-03-16 01:21:05.352652 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB
2026-03-16 01:21:05.352656 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB
2026-03-16 01:21:05.352678 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB
2026-03-16 01:21:05.352682 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB
2026-03-16 01:21:05.352686 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB
2026-03-16 01:21:05.352690 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB
2026-03-16 01:21:05.352694 | orchestrator | .rgw.root 9 32 1.4 KiB 4 32 KiB 0 53 GiB
2026-03-16 01:21:05.352698 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB
2026-03-16 01:21:05.352701 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB
2026-03-16 01:21:05.352705 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.91 35 GiB
2026-03-16 01:21:05.352709 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB
2026-03-16 01:21:05.352713 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB
2026-03-16 01:21:05.398669 | orchestrator | ++ semver 9.5.0 5.0.0
2026-03-16 01:21:05.443672 | orchestrator | + [[ 1 -eq -1 ]]
2026-03-16 01:21:05.443753 | orchestrator | + [[ ! -e /etc/redhat-release ]]
2026-03-16 01:21:05.443764 | orchestrator | + osism apply facts
2026-03-16 01:21:07.560475 | orchestrator | 2026-03-16 01:21:07 | INFO  | Task 4a55b371-8df2-4d4b-995d-6da200a4d3c1 (facts) was prepared for execution.
2026-03-16 01:21:07.560561 | orchestrator | 2026-03-16 01:21:07 | INFO  | It takes a moment until task 4a55b371-8df2-4d4b-995d-6da200a4d3c1 (facts) has been started and output is visible here.
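The trace above runs `semver 9.5.0 5.0.0`, captures `1`, and tests it with `[[ 1 -eq -1 ]]` to decide whether the installed version is older than the required one. A minimal sketch of such a three-way comparison, assuming (this is an assumption, not the testbed's actual `semver` helper) that it returns -1/0/1 like `strcmp` for dotted numeric versions:

```python
def semver_cmp(a: str, b: str) -> int:
    """Hypothetical stand-in for the `semver` helper seen in the trace:
    return -1 if a < b, 0 if equal, 1 if a > b (numeric dotted versions only,
    no pre-release tags)."""
    pa = [int(x) for x in a.split(".")]
    pb = [int(x) for x in b.split(".")]
    # Python compares lists lexicographically, which matches semver precedence
    # for plain MAJOR.MINOR.PATCH versions.
    return (pa > pb) - (pa < pb)

print(semver_cmp("9.5.0", "5.0.0"))  # 1, so a guard like [[ $result -eq -1 ]] is skipped
```

With this contract, the `[[ 1 -eq -1 ]]` branch in the log is the "installed version is older" path, and it is correctly not taken for 9.5.0 vs. 5.0.0.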
2026-03-16 01:21:22.450149 | orchestrator |
2026-03-16 01:21:22.450339 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-03-16 01:21:22.450367 | orchestrator |
2026-03-16 01:21:22.450383 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-16 01:21:22.450399 | orchestrator | Monday 16 March 2026 01:21:11 +0000 (0:00:00.272) 0:00:00.272 **********
2026-03-16 01:21:22.450415 | orchestrator | ok: [testbed-manager]
2026-03-16 01:21:22.450434 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:21:22.450450 | orchestrator | ok: [testbed-node-1]
2026-03-16 01:21:22.450465 | orchestrator | ok: [testbed-node-2]
2026-03-16 01:21:22.450480 | orchestrator | ok: [testbed-node-3]
2026-03-16 01:21:22.450495 | orchestrator | ok: [testbed-node-4]
2026-03-16 01:21:22.450511 | orchestrator | ok: [testbed-node-5]
2026-03-16 01:21:22.450526 | orchestrator |
2026-03-16 01:21:22.450542 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-16 01:21:22.450551 | orchestrator | Monday 16 March 2026 01:21:13 +0000 (0:00:01.556) 0:00:01.829 **********
2026-03-16 01:21:22.450562 | orchestrator | skipping: [testbed-manager]
2026-03-16 01:21:22.450573 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:21:22.450583 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:21:22.450593 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:21:22.450603 | orchestrator | skipping: [testbed-node-3]
2026-03-16 01:21:22.450614 | orchestrator | skipping: [testbed-node-4]
2026-03-16 01:21:22.450624 | orchestrator | skipping: [testbed-node-5]
2026-03-16 01:21:22.450633 | orchestrator |
2026-03-16 01:21:22.450643 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-16 01:21:22.450653 | orchestrator |
2026-03-16 01:21:22.450664 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-16 01:21:22.450674 | orchestrator | Monday 16 March 2026 01:21:14 +0000 (0:00:01.346) 0:00:03.175 **********
2026-03-16 01:21:22.450683 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:21:22.450694 | orchestrator | ok: [testbed-node-1]
2026-03-16 01:21:22.450704 | orchestrator | ok: [testbed-node-2]
2026-03-16 01:21:22.450714 | orchestrator | ok: [testbed-manager]
2026-03-16 01:21:22.450724 | orchestrator | ok: [testbed-node-3]
2026-03-16 01:21:22.450734 | orchestrator | ok: [testbed-node-5]
2026-03-16 01:21:22.450744 | orchestrator | ok: [testbed-node-4]
2026-03-16 01:21:22.450780 | orchestrator |
2026-03-16 01:21:22.450791 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-03-16 01:21:22.450801 | orchestrator |
2026-03-16 01:21:22.450811 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-03-16 01:21:22.450821 | orchestrator | Monday 16 March 2026 01:21:21 +0000 (0:00:06.486) 0:00:09.662 **********
2026-03-16 01:21:22.450831 | orchestrator | skipping: [testbed-manager]
2026-03-16 01:21:22.450841 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:21:22.450851 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:21:22.450861 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:21:22.450869 | orchestrator | skipping: [testbed-node-3]
2026-03-16 01:21:22.450878 | orchestrator | skipping: [testbed-node-4]
2026-03-16 01:21:22.450886 | orchestrator | skipping: [testbed-node-5]
2026-03-16 01:21:22.450895 | orchestrator |
2026-03-16 01:21:22.450917 | orchestrator | PLAY RECAP *********************************************************************
2026-03-16 01:21:22.450926 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-16 01:21:22.450936 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-16 01:21:22.450945 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-16 01:21:22.450954 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-16 01:21:22.450963 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-16 01:21:22.450972 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-16 01:21:22.450981 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-16 01:21:22.450989 | orchestrator |
2026-03-16 01:21:22.450998 | orchestrator |
2026-03-16 01:21:22.451007 | orchestrator | TASKS RECAP ********************************************************************
2026-03-16 01:21:22.451015 | orchestrator | Monday 16 March 2026 01:21:21 +0000 (0:00:00.626) 0:00:10.289 **********
2026-03-16 01:21:22.451024 | orchestrator | ===============================================================================
2026-03-16 01:21:22.451032 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.49s
2026-03-16 01:21:22.451041 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.56s
2026-03-16 01:21:22.451050 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.35s
2026-03-16 01:21:22.451058 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.63s
2026-03-16 01:21:22.771142 | orchestrator | + osism validate ceph-mons
2026-03-16 01:21:55.023751 | orchestrator |
2026-03-16 01:21:55.023945 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2026-03-16 01:21:55.023962 | orchestrator |
2026-03-16 01:21:55.023973 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-03-16 01:21:55.023983 | orchestrator | Monday 16 March 2026 01:21:39 +0000 (0:00:00.459) 0:00:00.459 **********
2026-03-16 01:21:55.023994 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-16 01:21:55.024003 | orchestrator |
2026-03-16 01:21:55.024014 | orchestrator | TASK [Create report output directory] ******************************************
2026-03-16 01:21:55.024034 | orchestrator | Monday 16 March 2026 01:21:40 +0000 (0:00:00.828) 0:00:01.288 **********
2026-03-16 01:21:55.024044 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-16 01:21:55.024054 | orchestrator |
2026-03-16 01:21:55.024063 | orchestrator | TASK [Define report vars] ******************************************************
2026-03-16 01:21:55.024169 | orchestrator | Monday 16 March 2026 01:21:41 +0000 (0:00:00.948) 0:00:02.236 **********
2026-03-16 01:21:55.024220 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:21:55.024238 | orchestrator |
2026-03-16 01:21:55.024255 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-03-16 01:21:55.024271 | orchestrator | Monday 16 March 2026 01:21:41 +0000 (0:00:00.139) 0:00:02.375 **********
2026-03-16 01:21:55.024286 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:21:55.024303 | orchestrator | ok: [testbed-node-1]
2026-03-16 01:21:55.024317 | orchestrator | ok: [testbed-node-2]
2026-03-16 01:21:55.024331 | orchestrator |
2026-03-16 01:21:55.024346 | orchestrator | TASK [Get container info] ******************************************************
2026-03-16 01:21:55.024360 | orchestrator | Monday 16 March 2026 01:21:41 +0000 (0:00:00.308) 0:00:02.684 **********
2026-03-16 01:21:55.024375 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:21:55.024392 | orchestrator | ok: [testbed-node-1]
2026-03-16 01:21:55.024407 | orchestrator | ok: [testbed-node-2]
2026-03-16 01:21:55.024423 | orchestrator |
2026-03-16 01:21:55.024439 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-03-16 01:21:55.024454 | orchestrator | Monday 16 March 2026 01:21:42 +0000 (0:00:01.078) 0:00:03.763 **********
2026-03-16 01:21:55.024469 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:21:55.024485 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:21:55.024500 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:21:55.024516 | orchestrator |
2026-03-16 01:21:55.024531 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-03-16 01:21:55.024548 | orchestrator | Monday 16 March 2026 01:21:43 +0000 (0:00:00.289) 0:00:04.052 **********
2026-03-16 01:21:55.024565 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:21:55.024579 | orchestrator | ok: [testbed-node-1]
2026-03-16 01:21:55.024593 | orchestrator | ok: [testbed-node-2]
2026-03-16 01:21:55.024608 | orchestrator |
2026-03-16 01:21:55.024624 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-16 01:21:55.024640 | orchestrator | Monday 16 March 2026 01:21:43 +0000 (0:00:00.476) 0:00:04.529 **********
2026-03-16 01:21:55.024656 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:21:55.024672 | orchestrator | ok: [testbed-node-1]
2026-03-16 01:21:55.024688 | orchestrator | ok: [testbed-node-2]
2026-03-16 01:21:55.024704 | orchestrator |
2026-03-16 01:21:55.024720 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2026-03-16 01:21:55.024737 | orchestrator | Monday 16 March 2026 01:21:43 +0000 (0:00:00.330) 0:00:04.860 **********
2026-03-16 01:21:55.024753 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:21:55.024769 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:21:55.024784 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:21:55.024800 | orchestrator |
2026-03-16 01:21:55.024817 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2026-03-16 01:21:55.024834 | orchestrator | Monday 16 March 2026 01:21:44 +0000 (0:00:00.290) 0:00:05.150 **********
2026-03-16 01:21:55.024851 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:21:55.024866 | orchestrator | ok: [testbed-node-1]
2026-03-16 01:21:55.024883 | orchestrator | ok: [testbed-node-2]
2026-03-16 01:21:55.024899 | orchestrator |
2026-03-16 01:21:55.024914 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-16 01:21:55.024932 | orchestrator | Monday 16 March 2026 01:21:44 +0000 (0:00:00.480) 0:00:05.631 **********
2026-03-16 01:21:55.024949 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:21:55.024966 | orchestrator |
2026-03-16 01:21:55.024983 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-16 01:21:55.024996 | orchestrator | Monday 16 March 2026 01:21:44 +0000 (0:00:00.252) 0:00:05.883 **********
2026-03-16 01:21:55.025006 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:21:55.025015 | orchestrator |
2026-03-16 01:21:55.025025 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-16 01:21:55.025093 | orchestrator | Monday 16 March 2026 01:21:45 +0000 (0:00:00.279) 0:00:06.163 **********
2026-03-16 01:21:55.025120 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:21:55.025130 | orchestrator |
2026-03-16 01:21:55.025140 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-16 01:21:55.025151 | orchestrator | Monday 16 March 2026 01:21:45 +0000 (0:00:00.258) 0:00:06.422 **********
2026-03-16 01:21:55.025160 | orchestrator |
2026-03-16 01:21:55.025170 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-16 01:21:55.025213 | orchestrator | Monday 16 March 2026 01:21:45 +0000 (0:00:00.074) 0:00:06.497 **********
2026-03-16 01:21:55.025226 | orchestrator |
2026-03-16 01:21:55.025242 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-16 01:21:55.025258 | orchestrator | Monday 16 March 2026 01:21:45 +0000 (0:00:00.071) 0:00:06.568 **********
2026-03-16 01:21:55.025273 | orchestrator |
2026-03-16 01:21:55.025289 | orchestrator | TASK [Print report file information] *******************************************
2026-03-16 01:21:55.025307 | orchestrator | Monday 16 March 2026 01:21:45 +0000 (0:00:00.074) 0:00:06.642 **********
2026-03-16 01:21:55.025323 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:21:55.025340 | orchestrator |
2026-03-16 01:21:55.025351 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-03-16 01:21:55.025360 | orchestrator | Monday 16 March 2026 01:21:45 +0000 (0:00:00.250) 0:00:06.893 **********
2026-03-16 01:21:55.025370 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:21:55.025380 | orchestrator |
2026-03-16 01:21:55.025414 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2026-03-16 01:21:55.025424 | orchestrator | Monday 16 March 2026 01:21:46 +0000 (0:00:00.256) 0:00:07.149 **********
2026-03-16 01:21:55.025434 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:21:55.025444 | orchestrator |
2026-03-16 01:21:55.025454 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2026-03-16 01:21:55.025464 | orchestrator | Monday 16 March 2026 01:21:46 +0000 (0:00:00.114) 0:00:07.263 **********
2026-03-16 01:21:55.025474 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:21:55.025484 | orchestrator |
2026-03-16 01:21:55.025493 | orchestrator | TASK [Set quorum test data] ****************************************************
2026-03-16 01:21:55.025503 | orchestrator | Monday 16 March 2026 01:21:47 +0000 (0:00:01.610) 0:00:08.874 **********
2026-03-16 01:21:55.025513 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:21:55.025523 | orchestrator |
2026-03-16 01:21:55.025532 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2026-03-16 01:21:55.025542 | orchestrator | Monday 16 March 2026 01:21:48 +0000 (0:00:00.520) 0:00:09.395 **********
2026-03-16 01:21:55.025552 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:21:55.025561 | orchestrator |
2026-03-16 01:21:55.025571 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2026-03-16 01:21:55.025581 | orchestrator | Monday 16 March 2026 01:21:48 +0000 (0:00:00.143) 0:00:09.538 **********
2026-03-16 01:21:55.025591 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:21:55.025660 | orchestrator |
2026-03-16 01:21:55.025673 | orchestrator | TASK [Set fsid test vars] ******************************************************
2026-03-16 01:21:55.025683 | orchestrator | Monday 16 March 2026 01:21:48 +0000 (0:00:00.341) 0:00:09.880 **********
2026-03-16 01:21:55.025693 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:21:55.025702 | orchestrator |
2026-03-16 01:21:55.025712 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2026-03-16 01:21:55.025722 | orchestrator | Monday 16 March 2026 01:21:49 +0000 (0:00:00.298) 0:00:10.179 **********
2026-03-16 01:21:55.025731 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:21:55.025741 | orchestrator |
2026-03-16 01:21:55.025751 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2026-03-16 01:21:55.025760 | orchestrator | Monday 16 March 2026 01:21:49 +0000 (0:00:00.111) 0:00:10.290 **********
2026-03-16 01:21:55.025770 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:21:55.025780 | orchestrator |
2026-03-16 01:21:55.025790 | orchestrator | TASK [Prepare status test vars] ************************************************
2026-03-16 01:21:55.025808 | orchestrator | Monday 16 March 2026 01:21:49 +0000 (0:00:00.140) 0:00:10.431 **********
2026-03-16 01:21:55.025818 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:21:55.025828 | orchestrator |
2026-03-16 01:21:55.025838 | orchestrator | TASK [Gather status data] ******************************************************
2026-03-16 01:21:55.025847 | orchestrator | Monday 16 March 2026 01:21:49 +0000 (0:00:00.118) 0:00:10.549 **********
2026-03-16 01:21:55.025857 | orchestrator | changed: [testbed-node-0]
2026-03-16 01:21:55.025867 | orchestrator |
2026-03-16 01:21:55.025876 | orchestrator | TASK [Set health test data] ****************************************************
2026-03-16 01:21:55.025886 | orchestrator | Monday 16 March 2026 01:21:50 +0000 (0:00:01.391) 0:00:11.941 **********
2026-03-16 01:21:55.025895 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:21:55.025905 | orchestrator |
2026-03-16 01:21:55.025915 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2026-03-16 01:21:55.025925 | orchestrator | Monday 16 March 2026 01:21:51 +0000 (0:00:00.313) 0:00:12.254 **********
2026-03-16 01:21:55.025934 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:21:55.025944 | orchestrator |
2026-03-16 01:21:55.025960 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2026-03-16 01:21:55.025970 | orchestrator | Monday 16 March 2026 01:21:51 +0000 (0:00:00.140) 0:00:12.394 **********
2026-03-16 01:21:55.025980 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:21:55.025990 | orchestrator |
2026-03-16 01:21:55.025999 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2026-03-16 01:21:55.026009 | orchestrator | Monday 16 March 2026 01:21:51 +0000 (0:00:00.151) 0:00:12.546 **********
2026-03-16 01:21:55.026100 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:21:55.026111 | orchestrator |
2026-03-16 01:21:55.026121 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2026-03-16 01:21:55.026130 | orchestrator | Monday 16 March 2026 01:21:51 +0000 (0:00:00.130) 0:00:12.677 **********
2026-03-16 01:21:55.026140 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:21:55.026150 | orchestrator |
2026-03-16 01:21:55.026160 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-03-16 01:21:55.026169 | orchestrator | Monday 16 March 2026 01:21:52 +0000 (0:00:00.312) 0:00:12.990 **********
2026-03-16 01:21:55.026234 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-16 01:21:55.026253 | orchestrator |
2026-03-16 01:21:55.026268 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-03-16 01:21:55.026284 | orchestrator | Monday 16 March 2026 01:21:52 +0000 (0:00:00.247) 0:00:13.238 **********
2026-03-16 01:21:55.026302 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:21:55.026318 | orchestrator |
2026-03-16 01:21:55.026335 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-16 01:21:55.026352 | orchestrator | Monday 16 March 2026 01:21:52 +0000 (0:00:00.272) 0:00:13.510 **********
2026-03-16 01:21:55.026366 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-16 01:21:55.026376 | orchestrator |
2026-03-16 01:21:55.026386 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-16 01:21:55.026399 | orchestrator | Monday 16 March 2026 01:21:54 +0000 (0:00:01.746) 0:00:15.257 **********
2026-03-16 01:21:55.026409 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-16 01:21:55.026418 | orchestrator |
2026-03-16 01:21:55.026428 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-16 01:21:55.026437 | orchestrator | Monday 16 March 2026 01:21:54 +0000 (0:00:00.268) 0:00:15.526 **********
2026-03-16 01:21:55.026447 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-16 01:21:55.026457 | orchestrator |
2026-03-16 01:21:55.026478 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-16 01:21:57.593239 | orchestrator | Monday 16 March 2026 01:21:54 +0000 (0:00:00.259) 0:00:15.786 **********
2026-03-16 01:21:57.593333 | orchestrator |
2026-03-16 01:21:57.593345 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-16 01:21:57.593378 | orchestrator | Monday 16 March 2026 01:21:54 +0000 (0:00:00.069) 0:00:15.855 **********
2026-03-16 01:21:57.593387 | orchestrator |
2026-03-16 01:21:57.593396 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-16 01:21:57.593405 | orchestrator | Monday 16 March 2026 01:21:54 +0000 (0:00:00.068) 0:00:15.924 **********
2026-03-16 01:21:57.593413 | orchestrator |
2026-03-16 01:21:57.593422 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-03-16 01:21:57.593431 | orchestrator | Monday 16 March 2026 01:21:55 +0000 (0:00:00.072) 0:00:15.997 **********
2026-03-16 01:21:57.593439 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-16 01:21:57.593448 | orchestrator |
2026-03-16 01:21:57.593457 | orchestrator | TASK [Print report file information] *******************************************
2026-03-16 01:21:57.593465 | orchestrator | Monday 16 March 2026 01:21:56 +0000 (0:00:01.518) 0:00:17.515 **********
2026-03-16 01:21:57.593474 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-03-16 01:21:57.593483 | orchestrator |     "msg": [
2026-03-16 01:21:57.593494 | orchestrator |         "Validator run completed.",
2026-03-16 01:21:57.593503 | orchestrator |         "You can find the report file here:",
2026-03-16 01:21:57.593512 | orchestrator |         "/opt/reports/validator/ceph-mons-validator-2026-03-16T01:21:40+00:00-report.json",
2026-03-16 01:21:57.593522 | orchestrator |         "on the following host:",
2026-03-16 01:21:57.593531 | orchestrator |         "testbed-manager"
2026-03-16 01:21:57.593539 | orchestrator |     ]
2026-03-16 01:21:57.593552 | orchestrator | }
2026-03-16 01:21:57.593567 | orchestrator |
2026-03-16 01:21:57.593583 | orchestrator | PLAY RECAP *********************************************************************
2026-03-16 01:21:57.593644 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-03-16 01:21:57.593664 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-16 01:21:57.593679 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-16 01:21:57.593694 | orchestrator |
2026-03-16 01:21:57.593709 | orchestrator |
2026-03-16 01:21:57.593723 | orchestrator | TASKS RECAP ********************************************************************
2026-03-16 01:21:57.593737 | orchestrator | Monday 16 March 2026 01:21:57 +0000 (0:00:00.752) 0:00:18.267 **********
2026-03-16 01:21:57.593752 | orchestrator | ===============================================================================
2026-03-16 01:21:57.593768 | orchestrator | Aggregate test results step one ----------------------------------------- 1.75s
2026-03-16 01:21:57.593779 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.61s
2026-03-16 01:21:57.593788 | orchestrator | Write report file ------------------------------------------------------- 1.52s
2026-03-16 01:21:57.593798 | orchestrator | Gather status data ------------------------------------------------------ 1.39s
2026-03-16 01:21:57.593808 | orchestrator | Get container info ------------------------------------------------------ 1.08s
2026-03-16 01:21:57.593818 | orchestrator | Create report output directory ------------------------------------------ 0.95s
2026-03-16 01:21:57.593828 | orchestrator | Get timestamp for report file ------------------------------------------- 0.83s
2026-03-16 01:21:57.593838 | orchestrator | Print report file information ------------------------------------------- 0.75s
2026-03-16 01:21:57.593849 | orchestrator | Set quorum test data ---------------------------------------------------- 0.52s
2026-03-16 01:21:57.593859 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.48s
2026-03-16 01:21:57.593869 | orchestrator | Set test result to passed if container is existing ---------------------- 0.48s
2026-03-16 01:21:57.593878 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.34s
2026-03-16 01:21:57.593898 | orchestrator | Prepare test data ------------------------------------------------------- 0.33s
2026-03-16 01:21:57.593909 | orchestrator | Set health test data ---------------------------------------------------- 0.31s
2026-03-16 01:21:57.593919 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.31s
2026-03-16 01:21:57.593929 | orchestrator | Prepare test data for container existance test -------------------------- 0.31s
2026-03-16 01:21:57.593939 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.30s
2026-03-16 01:21:57.593949 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.29s
2026-03-16 01:21:57.593959 | orchestrator | Set test result to failed if container is missing ----------------------- 0.29s
2026-03-16 01:21:57.593969 | orchestrator | Aggregate test results step two ----------------------------------------- 0.28s
2026-03-16 01:21:57.885673 | orchestrator | + osism validate ceph-mgrs
2026-03-16 01:22:28.819003 | orchestrator |
2026-03-16 01:22:28.819168 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2026-03-16 01:22:28.819188 | orchestrator |
2026-03-16 01:22:28.819198 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-03-16 01:22:28.819208 | orchestrator | Monday 16 March 2026 01:22:14 +0000 (0:00:00.428) 0:00:00.428 **********
2026-03-16 01:22:28.819217 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-16 01:22:28.819226 | orchestrator |
2026-03-16 01:22:28.819235 | orchestrator | TASK [Create report output directory] ******************************************
2026-03-16 01:22:28.819244 | orchestrator | Monday 16 March 2026 01:22:15 +0000 (0:00:00.840) 0:00:01.269 **********
2026-03-16 01:22:28.819253 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-16 01:22:28.819261 | orchestrator |
2026-03-16 01:22:28.819270 | orchestrator | TASK [Define report vars] ******************************************************
2026-03-16 01:22:28.819279 | orchestrator | Monday 16 March 2026 01:22:16 +0000 (0:00:00.998) 0:00:02.267 **********
2026-03-16 01:22:28.819288 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:22:28.819298 | orchestrator |
2026-03-16 01:22:28.819306 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-03-16 01:22:28.819315 | orchestrator | Monday 16 March 2026 01:22:16 +0000 (0:00:00.132) 0:00:02.400 **********
2026-03-16 01:22:28.819324 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:22:28.819333 | orchestrator | ok: [testbed-node-1]
2026-03-16 01:22:28.819341 | orchestrator | ok: [testbed-node-2]
2026-03-16 01:22:28.819349 | orchestrator |
2026-03-16 01:22:28.819357 | orchestrator | TASK [Get container info] ******************************************************
2026-03-16 01:22:28.819365 | orchestrator | Monday 16 March 2026 01:22:16 +0000 (0:00:00.284) 0:00:02.684 **********
2026-03-16 01:22:28.819373 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:22:28.819381 | orchestrator | ok: [testbed-node-1]
2026-03-16 01:22:28.819388 | orchestrator | ok: [testbed-node-2]
2026-03-16 01:22:28.819396 | orchestrator |
2026-03-16 01:22:28.819404 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-03-16 01:22:28.819412 | orchestrator | Monday 16 March 2026 01:22:17 +0000 (0:00:01.049) 0:00:03.733 **********
2026-03-16 01:22:28.819420 | orchestrator | skipping: [testbed-node-0]
2026-03-16 01:22:28.819428 | orchestrator | skipping: [testbed-node-1]
2026-03-16 01:22:28.819436 | orchestrator | skipping: [testbed-node-2]
2026-03-16 01:22:28.819444 | orchestrator |
2026-03-16 01:22:28.819451 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-03-16 01:22:28.819459 | orchestrator | Monday 16 March 2026 01:22:18 +0000 (0:00:00.282) 0:00:04.015 **********
2026-03-16 01:22:28.819467 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:22:28.819475 | orchestrator | ok: [testbed-node-1]
2026-03-16 01:22:28.819483 | orchestrator | ok: [testbed-node-2]
2026-03-16 01:22:28.819497 | orchestrator |
2026-03-16 01:22:28.819509 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-16 01:22:28.819522 | orchestrator | Monday 16 March 2026 01:22:18 +0000 (0:00:00.486) 0:00:04.502 **********
2026-03-16 01:22:28.819565 | orchestrator | ok: [testbed-node-0]
2026-03-16 01:22:28.819579 | orchestrator | ok: [testbed-node-1]
2026-03-16 01:22:28.819592 | orchestrator | ok: [testbed-node-2]
2026-03-16 01:22:28.819605 | orchestrator |
2026-03-16 01:22:28.819618 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running]
******************** 2026-03-16 01:22:28.819631 | orchestrator | Monday 16 March 2026 01:22:19 +0000 (0:00:00.297) 0:00:04.800 ********** 2026-03-16 01:22:28.819643 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:22:28.819677 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:22:28.819690 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:22:28.819703 | orchestrator | 2026-03-16 01:22:28.819715 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2026-03-16 01:22:28.819728 | orchestrator | Monday 16 March 2026 01:22:19 +0000 (0:00:00.296) 0:00:05.096 ********** 2026-03-16 01:22:28.819740 | orchestrator | ok: [testbed-node-0] 2026-03-16 01:22:28.819753 | orchestrator | ok: [testbed-node-1] 2026-03-16 01:22:28.819765 | orchestrator | ok: [testbed-node-2] 2026-03-16 01:22:28.819777 | orchestrator | 2026-03-16 01:22:28.819790 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-16 01:22:28.819804 | orchestrator | Monday 16 March 2026 01:22:19 +0000 (0:00:00.483) 0:00:05.579 ********** 2026-03-16 01:22:28.819818 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:22:28.819832 | orchestrator | 2026-03-16 01:22:28.819850 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-16 01:22:28.819863 | orchestrator | Monday 16 March 2026 01:22:20 +0000 (0:00:00.254) 0:00:05.834 ********** 2026-03-16 01:22:28.819876 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:22:28.819889 | orchestrator | 2026-03-16 01:22:28.819903 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-16 01:22:28.819916 | orchestrator | Monday 16 March 2026 01:22:20 +0000 (0:00:00.257) 0:00:06.091 ********** 2026-03-16 01:22:28.819929 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:22:28.819942 | orchestrator | 2026-03-16 01:22:28.819955 | orchestrator | TASK 
[Flush handlers] ********************************************************** 2026-03-16 01:22:28.819968 | orchestrator | Monday 16 March 2026 01:22:20 +0000 (0:00:00.268) 0:00:06.360 ********** 2026-03-16 01:22:28.819982 | orchestrator | 2026-03-16 01:22:28.819994 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-16 01:22:28.820007 | orchestrator | Monday 16 March 2026 01:22:20 +0000 (0:00:00.072) 0:00:06.432 ********** 2026-03-16 01:22:28.820019 | orchestrator | 2026-03-16 01:22:28.820032 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-16 01:22:28.820044 | orchestrator | Monday 16 March 2026 01:22:20 +0000 (0:00:00.070) 0:00:06.503 ********** 2026-03-16 01:22:28.820057 | orchestrator | 2026-03-16 01:22:28.820069 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-16 01:22:28.820081 | orchestrator | Monday 16 March 2026 01:22:20 +0000 (0:00:00.076) 0:00:06.580 ********** 2026-03-16 01:22:28.820095 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:22:28.820109 | orchestrator | 2026-03-16 01:22:28.820160 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-03-16 01:22:28.820174 | orchestrator | Monday 16 March 2026 01:22:21 +0000 (0:00:00.245) 0:00:06.825 ********** 2026-03-16 01:22:28.820185 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:22:28.820197 | orchestrator | 2026-03-16 01:22:28.820231 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2026-03-16 01:22:28.820243 | orchestrator | Monday 16 March 2026 01:22:21 +0000 (0:00:00.240) 0:00:07.066 ********** 2026-03-16 01:22:28.820256 | orchestrator | ok: [testbed-node-0] 2026-03-16 01:22:28.820270 | orchestrator | 2026-03-16 01:22:28.820283 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 
2026-03-16 01:22:28.820297 | orchestrator | Monday 16 March 2026 01:22:21 +0000 (0:00:00.140) 0:00:07.206 ********** 2026-03-16 01:22:28.820310 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:22:28.820322 | orchestrator | 2026-03-16 01:22:28.820336 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2026-03-16 01:22:28.820365 | orchestrator | Monday 16 March 2026 01:22:23 +0000 (0:00:01.970) 0:00:09.177 ********** 2026-03-16 01:22:28.820378 | orchestrator | ok: [testbed-node-0] 2026-03-16 01:22:28.820391 | orchestrator | 2026-03-16 01:22:28.820405 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2026-03-16 01:22:28.820419 | orchestrator | Monday 16 March 2026 01:22:23 +0000 (0:00:00.425) 0:00:09.603 ********** 2026-03-16 01:22:28.820431 | orchestrator | ok: [testbed-node-0] 2026-03-16 01:22:28.820444 | orchestrator | 2026-03-16 01:22:28.820457 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2026-03-16 01:22:28.820470 | orchestrator | Monday 16 March 2026 01:22:24 +0000 (0:00:00.319) 0:00:09.922 ********** 2026-03-16 01:22:28.820484 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:22:28.820498 | orchestrator | 2026-03-16 01:22:28.820511 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2026-03-16 01:22:28.820523 | orchestrator | Monday 16 March 2026 01:22:24 +0000 (0:00:00.153) 0:00:10.076 ********** 2026-03-16 01:22:28.820536 | orchestrator | ok: [testbed-node-0] 2026-03-16 01:22:28.820549 | orchestrator | 2026-03-16 01:22:28.820563 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-03-16 01:22:28.820590 | orchestrator | Monday 16 March 2026 01:22:24 +0000 (0:00:00.151) 0:00:10.227 ********** 2026-03-16 01:22:28.820615 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-16 
01:22:28.820629 | orchestrator | 2026-03-16 01:22:28.820643 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-03-16 01:22:28.820657 | orchestrator | Monday 16 March 2026 01:22:24 +0000 (0:00:00.248) 0:00:10.476 ********** 2026-03-16 01:22:28.820670 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:22:28.820683 | orchestrator | 2026-03-16 01:22:28.820691 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-16 01:22:28.820699 | orchestrator | Monday 16 March 2026 01:22:24 +0000 (0:00:00.245) 0:00:10.721 ********** 2026-03-16 01:22:28.820707 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-16 01:22:28.820715 | orchestrator | 2026-03-16 01:22:28.820723 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-16 01:22:28.820731 | orchestrator | Monday 16 March 2026 01:22:26 +0000 (0:00:01.267) 0:00:11.989 ********** 2026-03-16 01:22:28.820814 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-16 01:22:28.820825 | orchestrator | 2026-03-16 01:22:28.820833 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-16 01:22:28.820841 | orchestrator | Monday 16 March 2026 01:22:26 +0000 (0:00:00.246) 0:00:12.236 ********** 2026-03-16 01:22:28.820849 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-16 01:22:28.820857 | orchestrator | 2026-03-16 01:22:28.820865 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-16 01:22:28.820872 | orchestrator | Monday 16 March 2026 01:22:26 +0000 (0:00:00.254) 0:00:12.490 ********** 2026-03-16 01:22:28.820880 | orchestrator | 2026-03-16 01:22:28.820888 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-16 01:22:28.820896 | orchestrator 
| Monday 16 March 2026 01:22:26 +0000 (0:00:00.070) 0:00:12.561 ********** 2026-03-16 01:22:28.820904 | orchestrator | 2026-03-16 01:22:28.820912 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-16 01:22:28.820919 | orchestrator | Monday 16 March 2026 01:22:26 +0000 (0:00:00.072) 0:00:12.633 ********** 2026-03-16 01:22:28.820927 | orchestrator | 2026-03-16 01:22:28.820944 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-03-16 01:22:28.820952 | orchestrator | Monday 16 March 2026 01:22:27 +0000 (0:00:00.236) 0:00:12.870 ********** 2026-03-16 01:22:28.820960 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-16 01:22:28.820967 | orchestrator | 2026-03-16 01:22:28.820975 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-16 01:22:28.820993 | orchestrator | Monday 16 March 2026 01:22:28 +0000 (0:00:01.328) 0:00:14.198 ********** 2026-03-16 01:22:28.821001 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-03-16 01:22:28.821014 | orchestrator |  "msg": [ 2026-03-16 01:22:28.821029 | orchestrator |  "Validator run completed.", 2026-03-16 01:22:28.821042 | orchestrator |  "You can find the report file here:", 2026-03-16 01:22:28.821055 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-03-16T01:22:15+00:00-report.json", 2026-03-16 01:22:28.821069 | orchestrator |  "on the following host:", 2026-03-16 01:22:28.821082 | orchestrator |  "testbed-manager" 2026-03-16 01:22:28.821095 | orchestrator |  ] 2026-03-16 01:22:28.821109 | orchestrator | } 2026-03-16 01:22:28.821148 | orchestrator | 2026-03-16 01:22:28.821161 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-16 01:22:28.821176 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 
ignored=0 2026-03-16 01:22:28.821191 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-16 01:22:28.821221 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-16 01:22:29.185047 | orchestrator | 2026-03-16 01:22:29.185141 | orchestrator | 2026-03-16 01:22:29.185151 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-16 01:22:29.185159 | orchestrator | Monday 16 March 2026 01:22:28 +0000 (0:00:00.405) 0:00:14.604 ********** 2026-03-16 01:22:29.185166 | orchestrator | =============================================================================== 2026-03-16 01:22:29.185172 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.97s 2026-03-16 01:22:29.185178 | orchestrator | Write report file ------------------------------------------------------- 1.33s 2026-03-16 01:22:29.185184 | orchestrator | Aggregate test results step one ----------------------------------------- 1.27s 2026-03-16 01:22:29.185190 | orchestrator | Get container info ------------------------------------------------------ 1.05s 2026-03-16 01:22:29.185196 | orchestrator | Create report output directory ------------------------------------------ 1.00s 2026-03-16 01:22:29.185201 | orchestrator | Get timestamp for report file ------------------------------------------- 0.84s 2026-03-16 01:22:29.185207 | orchestrator | Set test result to passed if container is existing ---------------------- 0.49s 2026-03-16 01:22:29.185213 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.48s 2026-03-16 01:22:29.185219 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.43s 2026-03-16 01:22:29.185225 | orchestrator | Print report file information ------------------------------------------- 0.41s 2026-03-16 01:22:29.185231 | 
orchestrator | Flush handlers ---------------------------------------------------------- 0.38s 2026-03-16 01:22:29.185236 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.32s 2026-03-16 01:22:29.185242 | orchestrator | Prepare test data ------------------------------------------------------- 0.30s 2026-03-16 01:22:29.185248 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.30s 2026-03-16 01:22:29.185254 | orchestrator | Prepare test data for container existance test -------------------------- 0.28s 2026-03-16 01:22:29.185259 | orchestrator | Set test result to failed if container is missing ----------------------- 0.28s 2026-03-16 01:22:29.185265 | orchestrator | Aggregate test results step three --------------------------------------- 0.27s 2026-03-16 01:22:29.185271 | orchestrator | Aggregate test results step two ----------------------------------------- 0.26s 2026-03-16 01:22:29.185277 | orchestrator | Aggregate test results step three --------------------------------------- 0.25s 2026-03-16 01:22:29.185282 | orchestrator | Aggregate test results step one ----------------------------------------- 0.25s 2026-03-16 01:22:29.509627 | orchestrator | + osism validate ceph-osds 2026-03-16 01:22:50.742877 | orchestrator | 2026-03-16 01:22:50.742966 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2026-03-16 01:22:50.742977 | orchestrator | 2026-03-16 01:22:50.742984 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-03-16 01:22:50.742991 | orchestrator | Monday 16 March 2026 01:22:46 +0000 (0:00:00.443) 0:00:00.443 ********** 2026-03-16 01:22:50.742998 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-16 01:22:50.743005 | orchestrator | 2026-03-16 01:22:50.743011 | orchestrator | TASK [Get extra vars for Ceph configuration] 
*********************************** 2026-03-16 01:22:50.743017 | orchestrator | Monday 16 March 2026 01:22:47 +0000 (0:00:00.850) 0:00:01.294 ********** 2026-03-16 01:22:50.743023 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-16 01:22:50.743030 | orchestrator | 2026-03-16 01:22:50.743036 | orchestrator | TASK [Create report output directory] ****************************************** 2026-03-16 01:22:50.743042 | orchestrator | Monday 16 March 2026 01:22:47 +0000 (0:00:00.508) 0:00:01.803 ********** 2026-03-16 01:22:50.743048 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-16 01:22:50.743054 | orchestrator | 2026-03-16 01:22:50.743061 | orchestrator | TASK [Define report vars] ****************************************************** 2026-03-16 01:22:50.743067 | orchestrator | Monday 16 March 2026 01:22:48 +0000 (0:00:00.750) 0:00:02.553 ********** 2026-03-16 01:22:50.743073 | orchestrator | ok: [testbed-node-3] 2026-03-16 01:22:50.743081 | orchestrator | 2026-03-16 01:22:50.743087 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-03-16 01:22:50.743117 | orchestrator | Monday 16 March 2026 01:22:48 +0000 (0:00:00.125) 0:00:02.679 ********** 2026-03-16 01:22:50.743125 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:22:50.743132 | orchestrator | 2026-03-16 01:22:50.743138 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-03-16 01:22:50.743145 | orchestrator | Monday 16 March 2026 01:22:48 +0000 (0:00:00.140) 0:00:02.819 ********** 2026-03-16 01:22:50.743151 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:22:50.743158 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:22:50.743164 | orchestrator | skipping: [testbed-node-5] 2026-03-16 01:22:50.743170 | orchestrator | 2026-03-16 01:22:50.743177 | orchestrator | TASK [Define OSD test variables] 
*********************************************** 2026-03-16 01:22:50.743183 | orchestrator | Monday 16 March 2026 01:22:48 +0000 (0:00:00.337) 0:00:03.156 ********** 2026-03-16 01:22:50.743189 | orchestrator | ok: [testbed-node-3] 2026-03-16 01:22:50.743195 | orchestrator | 2026-03-16 01:22:50.743202 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-03-16 01:22:50.743208 | orchestrator | Monday 16 March 2026 01:22:49 +0000 (0:00:00.157) 0:00:03.314 ********** 2026-03-16 01:22:50.743214 | orchestrator | ok: [testbed-node-3] 2026-03-16 01:22:50.743220 | orchestrator | ok: [testbed-node-4] 2026-03-16 01:22:50.743226 | orchestrator | ok: [testbed-node-5] 2026-03-16 01:22:50.743233 | orchestrator | 2026-03-16 01:22:50.743239 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2026-03-16 01:22:50.743245 | orchestrator | Monday 16 March 2026 01:22:49 +0000 (0:00:00.317) 0:00:03.631 ********** 2026-03-16 01:22:50.743252 | orchestrator | ok: [testbed-node-3] 2026-03-16 01:22:50.743258 | orchestrator | 2026-03-16 01:22:50.743265 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-16 01:22:50.743271 | orchestrator | Monday 16 March 2026 01:22:50 +0000 (0:00:00.749) 0:00:04.380 ********** 2026-03-16 01:22:50.743277 | orchestrator | ok: [testbed-node-3] 2026-03-16 01:22:50.743283 | orchestrator | ok: [testbed-node-4] 2026-03-16 01:22:50.743289 | orchestrator | ok: [testbed-node-5] 2026-03-16 01:22:50.743295 | orchestrator | 2026-03-16 01:22:50.743302 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2026-03-16 01:22:50.743308 | orchestrator | Monday 16 March 2026 01:22:50 +0000 (0:00:00.334) 0:00:04.715 ********** 2026-03-16 01:22:50.743317 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4008a6820450e11f013457d43a2bf01f5aeb3505c8a395d45d69c673fc334940', 'image': 
'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-03-16 01:22:50.743347 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'dd4e8a6fecf1014e2c98840469ea12573373db7140c8a332f1f805489820b1ac', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-03-16 01:22:50.743354 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e26a6e6b987b48de93ffec7bb49c257a59fb78e2ce5f78bf3c7478af3962e008', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-03-16 01:22:50.743362 | orchestrator | skipping: [testbed-node-3] => (item={'id': '11407db747f4d0c88de84e002baca38a23a9d789dc3fd39025de04f6bfc3c65b', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2026-03-16 01:22:50.743370 | orchestrator | skipping: [testbed-node-3] => (item={'id': '092ed180abab643a1fc46a70599af6a529763a4f914eb6489a3b340062807c27', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2026-03-16 01:22:50.743389 | orchestrator | skipping: [testbed-node-3] => (item={'id': '123b02d209c39151dd27f21695684c5f77174dc97eb22573e9bb9855232dc7a6', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2026-03-16 01:22:50.743396 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5c022fc51be47fc5565c27e3d35ca58f398b740db1f8e9098bcfd3cad2d646e4', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': 
'/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 16 minutes (healthy)'})  2026-03-16 01:22:50.743416 | orchestrator | skipping: [testbed-node-3] => (item={'id': '390b2a2ee42708b575c18086f281f55201b9a7632d9d3681b9f79471ec4abec1', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 24 minutes'})  2026-03-16 01:22:50.743428 | orchestrator | skipping: [testbed-node-3] => (item={'id': '676bfe3ede790b1d88d5343e7fba4ba3dd5220a255278c37d3595a965408b29f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 25 minutes'})  2026-03-16 01:22:50.743435 | orchestrator | skipping: [testbed-node-3] => (item={'id': '74ab2ca5c6e42c855e0834bdbee545190466b414972f41cab5a04fbd3b80cb24', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 26 minutes'})  2026-03-16 01:22:50.743489 | orchestrator | ok: [testbed-node-3] => (item={'id': 'aee009525b22a9d5ead81ef0a521a4ba89353f98b1fd6104f5701fe1cf9618b9', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 27 minutes'}) 2026-03-16 01:22:50.743498 | orchestrator | ok: [testbed-node-3] => (item={'id': '3587fc150c606a1b604fdc1929378abec2847aab4cd3840b46aba56245ef24d8', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 27 minutes'}) 2026-03-16 01:22:50.743506 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ff5781b611cc93138bb45dcde76efda5559027eddc9dfd1274dd3c403726f38a', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 30 minutes'})  2026-03-16 01:22:50.743513 | orchestrator | skipping: [testbed-node-3] => (item={'id': 
'42f22f65c0713e6bf3b250659b48387dbd593bbe3db1c8502eee801676d29a21', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 31 minutes (healthy)'})  2026-03-16 01:22:50.743528 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a28c06b96963fa2273423c5c0d2c09fa1ac297999e0ebf139a5bce05e78da5ba', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 32 minutes (healthy)'})  2026-03-16 01:22:50.743536 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5c712de025253365a0c1cb4158d63654d51e9b7b2f5e8a0906f8aaf1b7f1e1ba', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 32 minutes'})  2026-03-16 01:22:50.743543 | orchestrator | skipping: [testbed-node-3] => (item={'id': '40f27d05507d0f5fb8fa6c5f9df04e02a85652c20329a92305c6c212abbac616', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 33 minutes'})  2026-03-16 01:22:50.743550 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a48c1141e52f3e852068ea70a053a38f341dfed6bbfb9ace60ff085646b9a17c', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 33 minutes'})  2026-03-16 01:22:50.743557 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4eb13788f690b27ec0e2a9f036f59e68862d8e982e0eb73a3a2404191f84c1a3', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-03-16 01:22:50.743565 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c0d77d85ee1d023cf8a66f33363f41848889f95e2c1179850af379a88e8814fb', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 
'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-03-16 01:22:50.743576 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e6d0551279e00561767702a3f6fafc0799f1f8363805f57df0ddb038256df261', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-03-16 01:22:50.989591 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2b48310bdbcb329c67fffc2a3a69466c68724143521e55513f2052b40e8ff12b', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2026-03-16 01:22:50.989667 | orchestrator | skipping: [testbed-node-4] => (item={'id': '528c7149ff3a6fb717921581129070cf514a166cf89c48a0016b84586358fc4f', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2026-03-16 01:22:50.989692 | orchestrator | skipping: [testbed-node-4] => (item={'id': '36df7952ff53947ca49c7a3ef25a34108a2d79a908cdb7982b3623e3e4e76891', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2026-03-16 01:22:50.989698 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'cd80ffa2c946dc1798742d08944204486ef462c82bc29bb92217f66514443094', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 16 minutes (healthy)'})  2026-03-16 01:22:50.989704 | orchestrator | skipping: [testbed-node-4] => (item={'id': '453cf408214d4b932cf08ec1ac91fd40e817050c7477861053979436632a9a70', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 
24 minutes'})  2026-03-16 01:22:50.989711 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'daa1d8bc03b04ac923e71daaa4756483b75b469e993f7203982c386bcf49a630', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 25 minutes'})  2026-03-16 01:22:50.989733 | orchestrator | skipping: [testbed-node-4] => (item={'id': '496eda4b34d07c3612db8b903c83e85f93c2571945a543fe34fa43b15ad7c945', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 26 minutes'})  2026-03-16 01:22:50.989740 | orchestrator | ok: [testbed-node-4] => (item={'id': '8ba894acbd1d4927a30d5cf5076b237e5b8a00c1837770beccb2b73cc54e2a34', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 27 minutes'}) 2026-03-16 01:22:50.989747 | orchestrator | ok: [testbed-node-4] => (item={'id': '81e6526dc8c9b855ef53cb801e2630f1cd70382bde54441eed38e180aa4f5658', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 27 minutes'}) 2026-03-16 01:22:50.989752 | orchestrator | skipping: [testbed-node-4] => (item={'id': '45b0960f875aadc80978bc2581166e068b6fed2c09c13b5788b799746d7d1198', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 30 minutes'})  2026-03-16 01:22:50.989758 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b6c8ffcacbdb96a5323a491ab9c7b2f39868ac20c3018113b8ddd47434afaebd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 31 minutes (healthy)'})  2026-03-16 01:22:50.989764 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9120b125664b9e2cb3dd1b17c846fc82f613f22fceaf039b0430eae812cd1e64', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 32 minutes (healthy)'})  2026-03-16 01:22:50.989770 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4db49ede8c42779406910be28010b24210474e22529c9bac8abe9157ecc96583', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 32 minutes'})  2026-03-16 01:22:50.989776 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9b53609d35be5219aac6908e150025925454d08378118c13d4b5a24eeec8546a', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 33 minutes'})  2026-03-16 01:22:50.989793 | orchestrator | skipping: [testbed-node-4] => (item={'id': '14ecb2d2a8e4bf7281d1d3efc6e52ab5a89825ff171776e36c97572296c56b83', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 33 minutes'})  2026-03-16 01:22:50.989799 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c17904286f12ca966076d3e025a4f281c8375b851bc3f043ab843f476d09f432', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-03-16 01:22:50.989806 | orchestrator | skipping: [testbed-node-5] => (item={'id': '837a6da38f9a80f338c979bd009900111701b7303659bb7e3fba885a45d242f6', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-03-16 01:22:50.989815 | orchestrator | skipping: [testbed-node-5] => (item={'id': '70af9f4b557da0c96cadcd4328ff14dbc23dbaf1fcdc6e9ca945073cf51e5b92', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-03-16 
01:22:50.989821 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ad928a7048061d3395f7327d3f73529e13f01c6b3735b86e8650869c7c48b155', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2026-03-16 01:22:50.989826 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5b688347d4dbc1e226c11002c4fd0437756014881bc535bc57a3a8baf314d552', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2026-03-16 01:22:50.989836 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a2b55d2a9e4ad5bb7e75e328848fc04613634fa2b9cbcc2b217bdd7afa585c35', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2026-03-16 01:22:50.989842 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'eff50315d247569bbca7765d0bf6838b10b831a007461825198a94114649ebfe', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 16 minutes (healthy)'})  2026-03-16 01:22:50.989847 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9a704b6eb19c0d745c82d6b1561f3a7dde76c5e96ccc83ace274da3e704204d0', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 24 minutes'})  2026-03-16 01:22:50.989853 | orchestrator | skipping: [testbed-node-5] => (item={'id': '47a10746887f3556d7051f211d8a54488e1fa09d259aa21c9443a8dcde170aca', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 25 minutes'})  2026-03-16 01:22:50.989859 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'63b0724ba1f7a995d405f9e67e06defd79bd9286b787b246f9fb3915a5d93f28', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 26 minutes'})  2026-03-16 01:22:50.989864 | orchestrator | ok: [testbed-node-5] => (item={'id': '1d2357d17e871b5eaf0d601b5c15af09ada28202b4535b72157bdf02b889d8dd', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 27 minutes'}) 2026-03-16 01:22:50.989870 | orchestrator | ok: [testbed-node-5] => (item={'id': 'd7b33f8684370626b6aed16f31cca847b43818778c55c1c765af940d4aba395e', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 27 minutes'}) 2026-03-16 01:22:50.989876 | orchestrator | skipping: [testbed-node-5] => (item={'id': '67c0f28d37f74c4ecc315dba991dfd8440c4fb20766cbd5a9ee2ef12dfd037f9', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 30 minutes'})  2026-03-16 01:22:50.989882 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'dacfa271fa349882258a18b2d344eed44047cbb5535daafeb9c14182bb4ce52f', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 31 minutes (healthy)'})  2026-03-16 01:22:50.989892 | orchestrator | skipping: [testbed-node-5] => (item={'id': '73c44e0ed485a1ac6dff29eef4614fbfd7db8c68c414b6a97f15d937cc96ca87', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 32 minutes (healthy)'})  2026-03-16 01:23:03.352394 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4a994f06e2014b7c3d711897e77e2416b3e6ebbe85a2b6ed579c0aa6ca459b88', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 32 
minutes'})  2026-03-16 01:23:03.352473 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'dd6366e6399a8fc73a823e95bed52af236c5bbde9c197165d31ae9f2be8e42b3', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 33 minutes'})  2026-03-16 01:23:03.352482 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0103cbc4de61f7c216a7aedb4ee0cbc2cc8ff508c3e5017721911b0759a32bba', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 33 minutes'})  2026-03-16 01:23:03.352487 | orchestrator | 2026-03-16 01:23:03.352509 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2026-03-16 01:23:03.352515 | orchestrator | Monday 16 March 2026 01:22:50 +0000 (0:00:00.485) 0:00:05.200 ********** 2026-03-16 01:23:03.352520 | orchestrator | ok: [testbed-node-3] 2026-03-16 01:23:03.352526 | orchestrator | ok: [testbed-node-4] 2026-03-16 01:23:03.352531 | orchestrator | ok: [testbed-node-5] 2026-03-16 01:23:03.352536 | orchestrator | 2026-03-16 01:23:03.352541 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-03-16 01:23:03.352546 | orchestrator | Monday 16 March 2026 01:22:51 +0000 (0:00:00.316) 0:00:05.517 ********** 2026-03-16 01:23:03.352551 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:23:03.352556 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:23:03.352561 | orchestrator | skipping: [testbed-node-5] 2026-03-16 01:23:03.352566 | orchestrator | 2026-03-16 01:23:03.352571 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-03-16 01:23:03.352576 | orchestrator | Monday 16 March 2026 01:22:51 +0000 (0:00:00.460) 0:00:05.978 ********** 2026-03-16 01:23:03.352580 | orchestrator | ok: [testbed-node-3] 2026-03-16 01:23:03.352585 | orchestrator | ok: 
[testbed-node-4] 2026-03-16 01:23:03.352590 | orchestrator | ok: [testbed-node-5] 2026-03-16 01:23:03.352594 | orchestrator | 2026-03-16 01:23:03.352599 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-16 01:23:03.352604 | orchestrator | Monday 16 March 2026 01:22:52 +0000 (0:00:00.312) 0:00:06.291 ********** 2026-03-16 01:23:03.352609 | orchestrator | ok: [testbed-node-3] 2026-03-16 01:23:03.352613 | orchestrator | ok: [testbed-node-4] 2026-03-16 01:23:03.352618 | orchestrator | ok: [testbed-node-5] 2026-03-16 01:23:03.352622 | orchestrator | 2026-03-16 01:23:03.352627 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-03-16 01:23:03.352632 | orchestrator | Monday 16 March 2026 01:22:52 +0000 (0:00:00.311) 0:00:06.602 ********** 2026-03-16 01:23:03.352637 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2026-03-16 01:23:03.352675 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-03-16 01:23:03.352681 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:23:03.352685 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-03-16 01:23:03.352690 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2026-03-16 01:23:03.352695 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:23:03.352700 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2026-03-16 01:23:03.352705 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-03-16 01:23:03.352709 | orchestrator | skipping: [testbed-node-5] 2026-03-16 01:23:03.352714 | orchestrator | 2026-03-16 01:23:03.352719 | orchestrator | TASK [Get 
count of ceph-osd containers that are not running] ******************* 2026-03-16 01:23:03.352723 | orchestrator | Monday 16 March 2026 01:22:52 +0000 (0:00:00.336) 0:00:06.938 ********** 2026-03-16 01:23:03.352728 | orchestrator | ok: [testbed-node-3] 2026-03-16 01:23:03.352733 | orchestrator | ok: [testbed-node-4] 2026-03-16 01:23:03.352737 | orchestrator | ok: [testbed-node-5] 2026-03-16 01:23:03.352742 | orchestrator | 2026-03-16 01:23:03.352747 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-03-16 01:23:03.352751 | orchestrator | Monday 16 March 2026 01:22:53 +0000 (0:00:00.505) 0:00:07.444 ********** 2026-03-16 01:23:03.352756 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:23:03.352761 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:23:03.352765 | orchestrator | skipping: [testbed-node-5] 2026-03-16 01:23:03.352770 | orchestrator | 2026-03-16 01:23:03.352775 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-03-16 01:23:03.352780 | orchestrator | Monday 16 March 2026 01:22:53 +0000 (0:00:00.297) 0:00:07.741 ********** 2026-03-16 01:23:03.352789 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:23:03.352793 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:23:03.352798 | orchestrator | skipping: [testbed-node-5] 2026-03-16 01:23:03.352803 | orchestrator | 2026-03-16 01:23:03.352807 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-03-16 01:23:03.352812 | orchestrator | Monday 16 March 2026 01:22:53 +0000 (0:00:00.282) 0:00:08.024 ********** 2026-03-16 01:23:03.352817 | orchestrator | ok: [testbed-node-3] 2026-03-16 01:23:03.352821 | orchestrator | ok: [testbed-node-4] 2026-03-16 01:23:03.352826 | orchestrator | ok: [testbed-node-5] 2026-03-16 01:23:03.352830 | orchestrator | 2026-03-16 01:23:03.352835 | orchestrator | TASK [Aggregate test results step one] 
***************************************** 2026-03-16 01:23:03.352840 | orchestrator | Monday 16 March 2026 01:22:54 +0000 (0:00:00.313) 0:00:08.337 ********** 2026-03-16 01:23:03.352844 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:23:03.352849 | orchestrator | 2026-03-16 01:23:03.352865 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-16 01:23:03.352870 | orchestrator | Monday 16 March 2026 01:22:54 +0000 (0:00:00.641) 0:00:08.978 ********** 2026-03-16 01:23:03.352874 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:23:03.352879 | orchestrator | 2026-03-16 01:23:03.352901 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-16 01:23:03.352905 | orchestrator | Monday 16 March 2026 01:22:55 +0000 (0:00:00.270) 0:00:09.249 ********** 2026-03-16 01:23:03.352910 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:23:03.352915 | orchestrator | 2026-03-16 01:23:03.352919 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-16 01:23:03.352925 | orchestrator | Monday 16 March 2026 01:22:55 +0000 (0:00:00.255) 0:00:09.504 ********** 2026-03-16 01:23:03.352930 | orchestrator | 2026-03-16 01:23:03.352935 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-16 01:23:03.352941 | orchestrator | Monday 16 March 2026 01:22:55 +0000 (0:00:00.089) 0:00:09.594 ********** 2026-03-16 01:23:03.352946 | orchestrator | 2026-03-16 01:23:03.352954 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-16 01:23:03.352960 | orchestrator | Monday 16 March 2026 01:22:55 +0000 (0:00:00.071) 0:00:09.665 ********** 2026-03-16 01:23:03.352965 | orchestrator | 2026-03-16 01:23:03.352971 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-16 01:23:03.352976 | 
orchestrator | Monday 16 March 2026 01:22:55 +0000 (0:00:00.068) 0:00:09.734 ********** 2026-03-16 01:23:03.352981 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:23:03.352986 | orchestrator | 2026-03-16 01:23:03.352992 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-03-16 01:23:03.352997 | orchestrator | Monday 16 March 2026 01:22:55 +0000 (0:00:00.266) 0:00:10.000 ********** 2026-03-16 01:23:03.353003 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:23:03.353008 | orchestrator | 2026-03-16 01:23:03.353014 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-16 01:23:03.353019 | orchestrator | Monday 16 March 2026 01:22:56 +0000 (0:00:00.262) 0:00:10.262 ********** 2026-03-16 01:23:03.353024 | orchestrator | ok: [testbed-node-3] 2026-03-16 01:23:03.353029 | orchestrator | ok: [testbed-node-4] 2026-03-16 01:23:03.353035 | orchestrator | ok: [testbed-node-5] 2026-03-16 01:23:03.353040 | orchestrator | 2026-03-16 01:23:03.353045 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2026-03-16 01:23:03.353051 | orchestrator | Monday 16 March 2026 01:22:56 +0000 (0:00:00.292) 0:00:10.555 ********** 2026-03-16 01:23:03.353056 | orchestrator | ok: [testbed-node-3] 2026-03-16 01:23:03.353061 | orchestrator | 2026-03-16 01:23:03.353067 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-03-16 01:23:03.353072 | orchestrator | Monday 16 March 2026 01:22:56 +0000 (0:00:00.589) 0:00:11.145 ********** 2026-03-16 01:23:03.353077 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-16 01:23:03.353104 | orchestrator | 2026-03-16 01:23:03.353113 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-03-16 01:23:03.353120 | orchestrator | Monday 16 March 2026 01:22:58 +0000 (0:00:01.609) 
0:00:12.754 ********** 2026-03-16 01:23:03.353127 | orchestrator | ok: [testbed-node-3] 2026-03-16 01:23:03.353134 | orchestrator | 2026-03-16 01:23:03.353142 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-03-16 01:23:03.353151 | orchestrator | Monday 16 March 2026 01:22:58 +0000 (0:00:00.131) 0:00:12.886 ********** 2026-03-16 01:23:03.353159 | orchestrator | ok: [testbed-node-3] 2026-03-16 01:23:03.353167 | orchestrator | 2026-03-16 01:23:03.353174 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-03-16 01:23:03.353182 | orchestrator | Monday 16 March 2026 01:22:58 +0000 (0:00:00.300) 0:00:13.187 ********** 2026-03-16 01:23:03.353188 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:23:03.353193 | orchestrator | 2026-03-16 01:23:03.353199 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-03-16 01:23:03.353204 | orchestrator | Monday 16 March 2026 01:22:59 +0000 (0:00:00.124) 0:00:13.311 ********** 2026-03-16 01:23:03.353209 | orchestrator | ok: [testbed-node-3] 2026-03-16 01:23:03.353215 | orchestrator | 2026-03-16 01:23:03.353220 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-16 01:23:03.353226 | orchestrator | Monday 16 March 2026 01:22:59 +0000 (0:00:00.110) 0:00:13.422 ********** 2026-03-16 01:23:03.353231 | orchestrator | ok: [testbed-node-3] 2026-03-16 01:23:03.353236 | orchestrator | ok: [testbed-node-4] 2026-03-16 01:23:03.353241 | orchestrator | ok: [testbed-node-5] 2026-03-16 01:23:03.353247 | orchestrator | 2026-03-16 01:23:03.353252 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-03-16 01:23:03.353258 | orchestrator | Monday 16 March 2026 01:22:59 +0000 (0:00:00.278) 0:00:13.700 ********** 2026-03-16 01:23:03.353263 | orchestrator | changed: [testbed-node-3] 2026-03-16 
01:23:03.353268 | orchestrator | changed: [testbed-node-4] 2026-03-16 01:23:03.353273 | orchestrator | changed: [testbed-node-5] 2026-03-16 01:23:03.353279 | orchestrator | 2026-03-16 01:23:03.353285 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-03-16 01:23:03.353290 | orchestrator | Monday 16 March 2026 01:23:01 +0000 (0:00:02.517) 0:00:16.218 ********** 2026-03-16 01:23:03.353296 | orchestrator | ok: [testbed-node-3] 2026-03-16 01:23:03.353301 | orchestrator | ok: [testbed-node-4] 2026-03-16 01:23:03.353306 | orchestrator | ok: [testbed-node-5] 2026-03-16 01:23:03.353311 | orchestrator | 2026-03-16 01:23:03.353316 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2026-03-16 01:23:03.353320 | orchestrator | Monday 16 March 2026 01:23:02 +0000 (0:00:00.493) 0:00:16.711 ********** 2026-03-16 01:23:03.353325 | orchestrator | ok: [testbed-node-3] 2026-03-16 01:23:03.353330 | orchestrator | ok: [testbed-node-4] 2026-03-16 01:23:03.353334 | orchestrator | ok: [testbed-node-5] 2026-03-16 01:23:03.353339 | orchestrator | 2026-03-16 01:23:03.353343 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-03-16 01:23:03.353348 | orchestrator | Monday 16 March 2026 01:23:03 +0000 (0:00:00.536) 0:00:17.248 ********** 2026-03-16 01:23:03.353352 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:23:03.353357 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:23:03.353362 | orchestrator | skipping: [testbed-node-5] 2026-03-16 01:23:03.353366 | orchestrator | 2026-03-16 01:23:03.353375 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-03-16 01:23:12.077641 | orchestrator | Monday 16 March 2026 01:23:03 +0000 (0:00:00.322) 0:00:17.571 ********** 2026-03-16 01:23:12.078494 | orchestrator | ok: [testbed-node-3] 2026-03-16 01:23:12.078569 | orchestrator | ok: 
[testbed-node-4] 2026-03-16 01:23:12.078582 | orchestrator | ok: [testbed-node-5] 2026-03-16 01:23:12.078595 | orchestrator | 2026-03-16 01:23:12.078607 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-03-16 01:23:12.078619 | orchestrator | Monday 16 March 2026 01:23:03 +0000 (0:00:00.521) 0:00:18.092 ********** 2026-03-16 01:23:12.078660 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:23:12.078673 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:23:12.078683 | orchestrator | skipping: [testbed-node-5] 2026-03-16 01:23:12.078694 | orchestrator | 2026-03-16 01:23:12.078705 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2026-03-16 01:23:12.078716 | orchestrator | Monday 16 March 2026 01:23:04 +0000 (0:00:00.285) 0:00:18.378 ********** 2026-03-16 01:23:12.078727 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:23:12.078738 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:23:12.078763 | orchestrator | skipping: [testbed-node-5] 2026-03-16 01:23:12.078774 | orchestrator | 2026-03-16 01:23:12.078785 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-16 01:23:12.078796 | orchestrator | Monday 16 March 2026 01:23:04 +0000 (0:00:00.285) 0:00:18.663 ********** 2026-03-16 01:23:12.078807 | orchestrator | ok: [testbed-node-3] 2026-03-16 01:23:12.078818 | orchestrator | ok: [testbed-node-4] 2026-03-16 01:23:12.078829 | orchestrator | ok: [testbed-node-5] 2026-03-16 01:23:12.078839 | orchestrator | 2026-03-16 01:23:12.078850 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-03-16 01:23:12.078861 | orchestrator | Monday 16 March 2026 01:23:04 +0000 (0:00:00.488) 0:00:19.152 ********** 2026-03-16 01:23:12.078872 | orchestrator | ok: [testbed-node-3] 2026-03-16 01:23:12.078883 | orchestrator | ok: [testbed-node-4] 2026-03-16 
01:23:12.078893 | orchestrator | ok: [testbed-node-5] 2026-03-16 01:23:12.078904 | orchestrator | 2026-03-16 01:23:12.078915 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2026-03-16 01:23:12.078925 | orchestrator | Monday 16 March 2026 01:23:05 +0000 (0:00:00.765) 0:00:19.918 ********** 2026-03-16 01:23:12.078936 | orchestrator | ok: [testbed-node-3] 2026-03-16 01:23:12.078947 | orchestrator | ok: [testbed-node-4] 2026-03-16 01:23:12.078957 | orchestrator | ok: [testbed-node-5] 2026-03-16 01:23:12.078968 | orchestrator | 2026-03-16 01:23:12.078979 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2026-03-16 01:23:12.078990 | orchestrator | Monday 16 March 2026 01:23:05 +0000 (0:00:00.303) 0:00:20.221 ********** 2026-03-16 01:23:12.079000 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:23:12.079011 | orchestrator | skipping: [testbed-node-4] 2026-03-16 01:23:12.079022 | orchestrator | skipping: [testbed-node-5] 2026-03-16 01:23:12.079032 | orchestrator | 2026-03-16 01:23:12.079043 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2026-03-16 01:23:12.079054 | orchestrator | Monday 16 March 2026 01:23:06 +0000 (0:00:00.311) 0:00:20.533 ********** 2026-03-16 01:23:12.079068 | orchestrator | ok: [testbed-node-3] 2026-03-16 01:23:12.079157 | orchestrator | ok: [testbed-node-4] 2026-03-16 01:23:12.079177 | orchestrator | ok: [testbed-node-5] 2026-03-16 01:23:12.079197 | orchestrator | 2026-03-16 01:23:12.079215 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-03-16 01:23:12.079232 | orchestrator | Monday 16 March 2026 01:23:06 +0000 (0:00:00.318) 0:00:20.851 ********** 2026-03-16 01:23:12.079244 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-16 01:23:12.079256 | orchestrator | 2026-03-16 01:23:12.079267 | orchestrator | TASK [Set 
validation result to failed if a test failed] ************************ 2026-03-16 01:23:12.079277 | orchestrator | Monday 16 March 2026 01:23:07 +0000 (0:00:00.706) 0:00:21.558 ********** 2026-03-16 01:23:12.079288 | orchestrator | skipping: [testbed-node-3] 2026-03-16 01:23:12.079299 | orchestrator | 2026-03-16 01:23:12.079310 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-16 01:23:12.079320 | orchestrator | Monday 16 March 2026 01:23:07 +0000 (0:00:00.245) 0:00:21.804 ********** 2026-03-16 01:23:12.079331 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-16 01:23:12.079342 | orchestrator | 2026-03-16 01:23:12.079352 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-16 01:23:12.079363 | orchestrator | Monday 16 March 2026 01:23:09 +0000 (0:00:01.600) 0:00:23.404 ********** 2026-03-16 01:23:12.079383 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-16 01:23:12.079394 | orchestrator | 2026-03-16 01:23:12.079405 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-16 01:23:12.079416 | orchestrator | Monday 16 March 2026 01:23:09 +0000 (0:00:00.264) 0:00:23.669 ********** 2026-03-16 01:23:12.079427 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-16 01:23:12.079438 | orchestrator | 2026-03-16 01:23:12.079448 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-16 01:23:12.079459 | orchestrator | Monday 16 March 2026 01:23:09 +0000 (0:00:00.262) 0:00:23.932 ********** 2026-03-16 01:23:12.079470 | orchestrator | 2026-03-16 01:23:12.079481 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-16 01:23:12.079491 | orchestrator | Monday 16 March 2026 01:23:09 +0000 (0:00:00.068) 0:00:24.001 ********** 2026-03-16 
01:23:12.079502 | orchestrator | 2026-03-16 01:23:12.079513 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-16 01:23:12.079524 | orchestrator | Monday 16 March 2026 01:23:09 +0000 (0:00:00.067) 0:00:24.068 ********** 2026-03-16 01:23:12.079534 | orchestrator | 2026-03-16 01:23:12.079545 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-03-16 01:23:12.079556 | orchestrator | Monday 16 March 2026 01:23:09 +0000 (0:00:00.074) 0:00:24.142 ********** 2026-03-16 01:23:12.079566 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-16 01:23:12.079577 | orchestrator | 2026-03-16 01:23:12.079589 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-16 01:23:12.079600 | orchestrator | Monday 16 March 2026 01:23:11 +0000 (0:00:01.309) 0:00:25.452 ********** 2026-03-16 01:23:12.079698 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2026-03-16 01:23:12.079715 | orchestrator |  "msg": [ 2026-03-16 01:23:12.079727 | orchestrator |  "Validator run completed.", 2026-03-16 01:23:12.079738 | orchestrator |  "You can find the report file here:", 2026-03-16 01:23:12.079749 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-03-16T01:22:46+00:00-report.json", 2026-03-16 01:23:12.079762 | orchestrator |  "on the following host:", 2026-03-16 01:23:12.079773 | orchestrator |  "testbed-manager" 2026-03-16 01:23:12.079784 | orchestrator |  ] 2026-03-16 01:23:12.079795 | orchestrator | } 2026-03-16 01:23:12.079807 | orchestrator | 2026-03-16 01:23:12.079818 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-16 01:23:12.079830 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-16 01:23:12.079850 | orchestrator | testbed-node-4 : ok=18  changed=1 
 unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-16 01:23:12.079862 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-16 01:23:12.079873 | orchestrator | 2026-03-16 01:23:12.079884 | orchestrator | 2026-03-16 01:23:12.079895 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-16 01:23:12.079906 | orchestrator | Monday 16 March 2026 01:23:11 +0000 (0:00:00.551) 0:00:26.003 ********** 2026-03-16 01:23:12.079916 | orchestrator | =============================================================================== 2026-03-16 01:23:12.079927 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.52s 2026-03-16 01:23:12.079973 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.61s 2026-03-16 01:23:12.079986 | orchestrator | Aggregate test results step one ----------------------------------------- 1.60s 2026-03-16 01:23:12.079997 | orchestrator | Write report file ------------------------------------------------------- 1.31s 2026-03-16 01:23:12.080008 | orchestrator | Get timestamp for report file ------------------------------------------- 0.85s 2026-03-16 01:23:12.080030 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.77s 2026-03-16 01:23:12.080041 | orchestrator | Create report output directory ------------------------------------------ 0.75s 2026-03-16 01:23:12.080052 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.75s 2026-03-16 01:23:12.080063 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.71s 2026-03-16 01:23:12.080106 | orchestrator | Aggregate test results step one ----------------------------------------- 0.64s 2026-03-16 01:23:12.080126 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 
0.59s 2026-03-16 01:23:12.080144 | orchestrator | Print report file information ------------------------------------------- 0.55s 2026-03-16 01:23:12.080161 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.54s 2026-03-16 01:23:12.080180 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.52s 2026-03-16 01:23:12.080200 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.51s 2026-03-16 01:23:12.080219 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.51s 2026-03-16 01:23:12.080238 | orchestrator | Parse LVM data as JSON -------------------------------------------------- 0.49s 2026-03-16 01:23:12.080250 | orchestrator | Prepare test data ------------------------------------------------------- 0.49s 2026-03-16 01:23:12.080261 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.49s 2026-03-16 01:23:12.080272 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.46s 2026-03-16 01:23:12.381424 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-03-16 01:23:12.387154 | orchestrator | + set -e 2026-03-16 01:23:12.387721 | orchestrator | + source /opt/manager-vars.sh 2026-03-16 01:23:12.387783 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-16 01:23:12.387794 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-16 01:23:12.387802 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-16 01:23:12.387810 | orchestrator | ++ CEPH_VERSION=reef 2026-03-16 01:23:12.387819 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-16 01:23:12.387828 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-16 01:23:12.387836 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-16 01:23:12.387844 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-16 01:23:12.387852 | orchestrator | ++ export 
OPENSTACK_VERSION=2024.2 2026-03-16 01:23:12.387860 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-16 01:23:12.387868 | orchestrator | ++ export ARA=false 2026-03-16 01:23:12.387876 | orchestrator | ++ ARA=false 2026-03-16 01:23:12.387884 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-16 01:23:12.387907 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-16 01:23:12.387914 | orchestrator | ++ export TEMPEST=true 2026-03-16 01:23:12.387931 | orchestrator | ++ TEMPEST=true 2026-03-16 01:23:12.387939 | orchestrator | ++ export IS_ZUUL=true 2026-03-16 01:23:12.387946 | orchestrator | ++ IS_ZUUL=true 2026-03-16 01:23:12.387954 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.83 2026-03-16 01:23:12.387962 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.83 2026-03-16 01:23:12.387970 | orchestrator | ++ export EXTERNAL_API=false 2026-03-16 01:23:12.387978 | orchestrator | ++ EXTERNAL_API=false 2026-03-16 01:23:12.387986 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-16 01:23:12.387993 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-16 01:23:12.388001 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-16 01:23:12.388009 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-16 01:23:12.388017 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-16 01:23:12.388024 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-16 01:23:12.388032 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-03-16 01:23:12.388040 | orchestrator | + source /etc/os-release 2026-03-16 01:23:12.388048 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-03-16 01:23:12.388056 | orchestrator | ++ NAME=Ubuntu 2026-03-16 01:23:12.388063 | orchestrator | ++ VERSION_ID=24.04 2026-03-16 01:23:12.388071 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-03-16 01:23:12.388130 | orchestrator | ++ VERSION_CODENAME=noble 2026-03-16 01:23:12.388138 | orchestrator | ++ ID=ubuntu 2026-03-16 01:23:12.388146 | orchestrator | ++ ID_LIKE=debian 2026-03-16 
01:23:12.388154 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-03-16 01:23:12.388162 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-03-16 01:23:12.388170 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-03-16 01:23:12.388201 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-03-16 01:23:12.388210 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-03-16 01:23:12.388218 | orchestrator | ++ LOGO=ubuntu-logo 2026-03-16 01:23:12.388226 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-03-16 01:23:12.388236 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-03-16 01:23:12.388244 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-03-16 01:23:12.410752 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-03-16 01:23:34.047161 | orchestrator | 2026-03-16 01:23:34.047238 | orchestrator | # Status of Elasticsearch 2026-03-16 01:23:34.047245 | orchestrator | 2026-03-16 01:23:34.047250 | orchestrator | + pushd /opt/configuration/contrib 2026-03-16 01:23:34.047255 | orchestrator | + echo 2026-03-16 01:23:34.047259 | orchestrator | + echo '# Status of Elasticsearch' 2026-03-16 01:23:34.047263 | orchestrator | + echo 2026-03-16 01:23:34.047268 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-03-16 01:23:34.215865 | orchestrator | OK - elasticsearch (kolla_logging) is running. 
status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-03-16 01:23:34.215950 | orchestrator | 2026-03-16 01:23:34.215964 | orchestrator | + echo 2026-03-16 01:23:34.215973 | orchestrator | + echo '# Status of MariaDB' 2026-03-16 01:23:34.216138 | orchestrator | # Status of MariaDB 2026-03-16 01:23:34.216153 | orchestrator | 2026-03-16 01:23:34.216162 | orchestrator | + echo 2026-03-16 01:23:34.216711 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-03-16 01:23:34.269638 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-16 01:23:34.269737 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-03-16 01:23:34.269752 | orchestrator | + MARIADB_USER=root_shard_0 2026-03-16 01:23:34.269763 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2026-03-16 01:23:34.333237 | orchestrator | Reading package lists... 2026-03-16 01:23:34.661176 | orchestrator | Building dependency tree... 2026-03-16 01:23:34.661601 | orchestrator | Reading state information... 2026-03-16 01:23:35.020992 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2026-03-16 01:23:35.021137 | orchestrator | bc set to manually installed. 2026-03-16 01:23:35.021160 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 
2026-03-16 01:23:35.647233 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2026-03-16 01:23:35.648848 | orchestrator | 2026-03-16 01:23:35.648937 | orchestrator | # Status of Prometheus 2026-03-16 01:23:35.648963 | orchestrator | 2026-03-16 01:23:35.648982 | orchestrator | + echo 2026-03-16 01:23:35.649120 | orchestrator | + echo '# Status of Prometheus' 2026-03-16 01:23:35.649146 | orchestrator | + echo 2026-03-16 01:23:35.649167 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-03-16 01:23:35.729830 | orchestrator | Unauthorized 2026-03-16 01:23:35.733314 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-03-16 01:23:35.784559 | orchestrator | Unauthorized 2026-03-16 01:23:35.788094 | orchestrator | 2026-03-16 01:23:35.788182 | orchestrator | # Status of RabbitMQ 2026-03-16 01:23:35.788198 | orchestrator | 2026-03-16 01:23:35.788210 | orchestrator | + echo 2026-03-16 01:23:35.788221 | orchestrator | + echo '# Status of RabbitMQ' 2026-03-16 01:23:35.788232 | orchestrator | + echo 2026-03-16 01:23:35.788243 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-03-16 01:23:35.839827 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-16 01:23:35.839899 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-03-16 01:23:35.839908 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2026-03-16 01:23:36.289852 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2026-03-16 01:23:36.298754 | orchestrator | 2026-03-16 01:23:36.298823 | orchestrator | # Status of Redis 2026-03-16 01:23:36.298832 | orchestrator | 2026-03-16 01:23:36.298839 | orchestrator | + echo 2026-03-16 01:23:36.298847 | orchestrator | + echo '# Status of Redis' 2026-03-16 01:23:36.298854 | orchestrator | + echo 2026-03-16 01:23:36.298862 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A 
-E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-03-16 01:23:36.303638 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001774s;;;0.000000;10.000000 2026-03-16 01:23:36.303719 | orchestrator | 2026-03-16 01:23:36.303738 | orchestrator | # Create backup of MariaDB database 2026-03-16 01:23:36.303752 | orchestrator | 2026-03-16 01:23:36.303765 | orchestrator | + popd 2026-03-16 01:23:36.303777 | orchestrator | + echo 2026-03-16 01:23:36.303789 | orchestrator | + echo '# Create backup of MariaDB database' 2026-03-16 01:23:36.303801 | orchestrator | + echo 2026-03-16 01:23:36.303813 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-03-16 01:23:38.344203 | orchestrator | 2026-03-16 01:23:38 | INFO  | Task 94cbfb4e-7621-4d69-8ccd-7fa210a0df3c (mariadb_backup) was prepared for execution. 2026-03-16 01:23:38.344283 | orchestrator | 2026-03-16 01:23:38 | INFO  | It takes a moment until task 94cbfb4e-7621-4d69-8ccd-7fa210a0df3c (mariadb_backup) has been started and output is visible here. 
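The checks above branch on the sign of a `semver 9.5.0 10.0.0-0` comparison to pick the MariaDB user (`root_shard_0` for managers below 10.0.0). A minimal sketch of that pattern, assuming GNU `sort -V` for version ordering; `semver_cmp` is a hypothetical stand-in for the `semver` helper the script actually calls:

```shell
# Approximation of the semver comparison traced in the log: prints -1,
# 0, or 1 depending on how version A orders against version B.
# Assumes GNU/busybox `sort -V`; pre-release suffixes like "-0" are dropped.
semver_cmp() {
  a=${1%%-*}; b=${2%%-*}
  if [ "$a" = "$b" ]; then printf '%s\n' 0; return; fi
  lower=$(printf '%s\n%s\n' "$a" "$b" | sort -V | head -n 1)
  if [ "$lower" = "$a" ]; then printf '%s\n' "-1"; else printf '%s\n' 1; fi
}

# Mirror of the branch in the log: managers older than 10.0.0 talk to
# the sharded root user of the Galera cluster.
if [ "$(semver_cmp 9.5.0 10.0.0-0)" -ge 0 ]; then
  MARIADB_USER=root
else
  MARIADB_USER=root_shard_0
fi
printf '%s\n' "$MARIADB_USER"   # -> root_shard_0
```

The `[[ -1 -ge 0 ]]` line in the trace is exactly this test evaluating false, which is why `MARIADB_USER=root_shard_0` follows it.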
2026-03-16 01:24:05.677248 | orchestrator | 2026-03-16 01:24:05.677363 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-16 01:24:05.677379 | orchestrator | 2026-03-16 01:24:05.677392 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-16 01:24:05.677404 | orchestrator | Monday 16 March 2026 01:23:42 +0000 (0:00:00.172) 0:00:00.172 ********** 2026-03-16 01:24:05.677416 | orchestrator | ok: [testbed-node-0] 2026-03-16 01:24:05.677428 | orchestrator | ok: [testbed-node-1] 2026-03-16 01:24:05.677439 | orchestrator | ok: [testbed-node-2] 2026-03-16 01:24:05.677450 | orchestrator | 2026-03-16 01:24:05.677461 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-16 01:24:05.677472 | orchestrator | Monday 16 March 2026 01:23:42 +0000 (0:00:00.332) 0:00:00.504 ********** 2026-03-16 01:24:05.677484 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-03-16 01:24:05.677496 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-03-16 01:24:05.677608 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-16 01:24:05.677621 | orchestrator | 2026-03-16 01:24:05.677632 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-16 01:24:05.677643 | orchestrator | 2026-03-16 01:24:05.677654 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-16 01:24:05.677666 | orchestrator | Monday 16 March 2026 01:23:43 +0000 (0:00:00.571) 0:00:01.076 ********** 2026-03-16 01:24:05.677677 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-16 01:24:05.677688 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-16 01:24:05.677699 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-16 01:24:05.677710 | orchestrator | 
2026-03-16 01:24:05.677722 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-16 01:24:05.677733 | orchestrator | Monday 16 March 2026 01:23:43 +0000 (0:00:00.400) 0:00:01.477 ********** 2026-03-16 01:24:05.677745 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-16 01:24:05.677758 | orchestrator | 2026-03-16 01:24:05.677771 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-03-16 01:24:05.677784 | orchestrator | Monday 16 March 2026 01:23:44 +0000 (0:00:00.593) 0:00:02.070 ********** 2026-03-16 01:24:05.677797 | orchestrator | ok: [testbed-node-0] 2026-03-16 01:24:05.677810 | orchestrator | ok: [testbed-node-1] 2026-03-16 01:24:05.677822 | orchestrator | ok: [testbed-node-2] 2026-03-16 01:24:05.677835 | orchestrator | 2026-03-16 01:24:05.677848 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-03-16 01:24:05.677861 | orchestrator | Monday 16 March 2026 01:23:47 +0000 (0:00:03.144) 0:00:05.214 ********** 2026-03-16 01:24:05.677873 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-03-16 01:24:05.677905 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-03-16 01:24:05.677943 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-16 01:24:05.677955 | orchestrator | mariadb_bootstrap_restart 2026-03-16 01:24:05.677966 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:24:05.678002 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:24:05.678101 | orchestrator | changed: [testbed-node-0] 2026-03-16 01:24:05.678122 | orchestrator | 2026-03-16 01:24:05.678141 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-16 01:24:05.678158 | orchestrator | 
skipping: no hosts matched 2026-03-16 01:24:05.678175 | orchestrator | 2026-03-16 01:24:05.678193 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-16 01:24:05.678291 | orchestrator | skipping: no hosts matched 2026-03-16 01:24:05.678311 | orchestrator | 2026-03-16 01:24:05.678323 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-03-16 01:24:05.678334 | orchestrator | skipping: no hosts matched 2026-03-16 01:24:05.678344 | orchestrator | 2026-03-16 01:24:05.678358 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-03-16 01:24:05.678377 | orchestrator | 2026-03-16 01:24:05.678402 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-16 01:24:05.678422 | orchestrator | Monday 16 March 2026 01:24:04 +0000 (0:00:17.133) 0:00:22.348 ********** 2026-03-16 01:24:05.678439 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:24:05.678455 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:24:05.678472 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:24:05.678490 | orchestrator | 2026-03-16 01:24:05.678506 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-16 01:24:05.678523 | orchestrator | Monday 16 March 2026 01:24:04 +0000 (0:00:00.321) 0:00:22.669 ********** 2026-03-16 01:24:05.678542 | orchestrator | skipping: [testbed-node-0] 2026-03-16 01:24:05.678560 | orchestrator | skipping: [testbed-node-1] 2026-03-16 01:24:05.678578 | orchestrator | skipping: [testbed-node-2] 2026-03-16 01:24:05.678595 | orchestrator | 2026-03-16 01:24:05.678614 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-16 01:24:05.678635 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-16 
01:24:05.678654 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-16 01:24:05.678666 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-16 01:24:05.678677 | orchestrator | 2026-03-16 01:24:05.678687 | orchestrator | 2026-03-16 01:24:05.678698 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-16 01:24:05.678710 | orchestrator | Monday 16 March 2026 01:24:05 +0000 (0:00:00.396) 0:00:23.066 ********** 2026-03-16 01:24:05.678721 | orchestrator | =============================================================================== 2026-03-16 01:24:05.678732 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 17.13s 2026-03-16 01:24:05.678768 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.14s 2026-03-16 01:24:05.678783 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.59s 2026-03-16 01:24:05.678800 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.57s 2026-03-16 01:24:05.678818 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.40s 2026-03-16 01:24:05.678836 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.40s 2026-03-16 01:24:05.678855 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2026-03-16 01:24:05.678872 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.32s 2026-03-16 01:24:05.986808 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2026-03-16 01:24:06.004957 | orchestrator | 2026-03-16 01:24:06.005105 | orchestrator | # OpenStack endpoints 2026-03-16 01:24:06.005119 | orchestrator | 2026-03-16 01:24:06.005129 | orchestrator | + 
set -e 2026-03-16 01:24:06.005141 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-16 01:24:06.005151 | orchestrator | ++ export INTERACTIVE=false 2026-03-16 01:24:06.005162 | orchestrator | ++ INTERACTIVE=false 2026-03-16 01:24:06.005171 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-16 01:24:06.005181 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-16 01:24:06.005190 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-03-16 01:24:06.005200 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-03-16 01:24:06.005211 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-16 01:24:06.005220 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-16 01:24:06.005230 | orchestrator | + export OS_CLOUD=admin 2026-03-16 01:24:06.005240 | orchestrator | + OS_CLOUD=admin 2026-03-16 01:24:06.005250 | orchestrator | + echo 2026-03-16 01:24:06.005260 | orchestrator | + echo '# OpenStack endpoints' 2026-03-16 01:24:06.005269 | orchestrator | + echo 2026-03-16 01:24:06.005278 | orchestrator | + openstack endpoint list 2026-03-16 01:24:09.295544 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-03-16 01:24:09.295615 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-03-16 01:24:09.295621 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-03-16 01:24:09.295626 | orchestrator | | 1e58fad4273f4e258e8a467e3cf8c6d2 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2026-03-16 01:24:09.295642 | orchestrator | | 2aa8d71a341e4e2ab491a007d4df537f | RegionOne | cinderv3 | volumev3 | True | public | 
https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-03-16 01:24:09.295646 | orchestrator | | 36b6cf81a6c242c59dc2a251b76e743d | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2026-03-16 01:24:09.295650 | orchestrator | | 451f6ed134c14e5b9aa8340ee088574f | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2026-03-16 01:24:09.295653 | orchestrator | | 59b2735de97b488ea110f1141139111d | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2026-03-16 01:24:09.295657 | orchestrator | | 658b223296f6413fa0d411968441e4b2 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-03-16 01:24:09.295661 | orchestrator | | 6f2180080dc54cc3a7274d460c2021f0 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2026-03-16 01:24:09.295665 | orchestrator | | 6f71a262d8984a9c91191cdad345ffe6 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-03-16 01:24:09.295669 | orchestrator | | 783ddcf14cac4e22b8126aa961ffc82f | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2026-03-16 01:24:09.295673 | orchestrator | | 784acd380d844c9eaf488fc8259b0378 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2026-03-16 01:24:09.295677 | orchestrator | | 79897c6d83b140e2a7ca4443076025b0 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-03-16 01:24:09.295681 | orchestrator | | 91ba62217df6455682751d736d7d1aa8 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2026-03-16 01:24:09.295697 | orchestrator | | 9889fc837aff4df7b3cb81c0d9f4d588 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2026-03-16 01:24:09.295701 | 
orchestrator | | ad0fbc6c44d24ade894ae1bee5389acf | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-03-16 01:24:09.295705 | orchestrator | | b6c3063461bb4821b69c8f6cd1058371 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-03-16 01:24:09.295708 | orchestrator | | bf742d5093a14ae78352ea1cd3a99424 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-03-16 01:24:09.295712 | orchestrator | | c207ba3de28a4316855f3b06cf82069a | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2026-03-16 01:24:09.295716 | orchestrator | | c28776c4dfcb430a9a954c8dae1e79f3 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-03-16 01:24:09.295720 | orchestrator | | d71fabc6d17d4466a16bd9914516bdcb | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2026-03-16 01:24:09.295724 | orchestrator | | edde986873044e148a382f005d5f950e | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-03-16 01:24:09.295736 | orchestrator | | f8cbd7415a114bcab5cc508a1ad8bbdb | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2026-03-16 01:24:09.295740 | orchestrator | | fb67fcf94b404fb6b589d1159911f83a | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2026-03-16 01:24:09.295744 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-03-16 01:24:09.536885 | orchestrator | 2026-03-16 01:24:09.536964 | orchestrator | # Cinder 2026-03-16 01:24:09.536997 | orchestrator | 2026-03-16 01:24:09.537010 | orchestrator | + echo 2026-03-16 
01:24:09.537021 | orchestrator | + echo '# Cinder' 2026-03-16 01:24:09.537032 | orchestrator | + echo 2026-03-16 01:24:09.537044 | orchestrator | + openstack volume service list 2026-03-16 01:24:13.160877 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-03-16 01:24:13.161013 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2026-03-16 01:24:13.161043 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-03-16 01:24:13.161051 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-03-16T01:24:04.000000 | 2026-03-16 01:24:13.161058 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-03-16T01:24:04.000000 | 2026-03-16 01:24:13.161065 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-03-16T01:24:04.000000 | 2026-03-16 01:24:13.161071 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-03-16T01:24:04.000000 | 2026-03-16 01:24:13.161078 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-03-16T01:24:10.000000 | 2026-03-16 01:24:13.161085 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-03-16T01:24:11.000000 | 2026-03-16 01:24:13.161091 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-03-16T01:24:03.000000 | 2026-03-16 01:24:13.161098 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-03-16T01:24:05.000000 | 2026-03-16 01:24:13.161105 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-03-16T01:24:06.000000 | 2026-03-16 01:24:13.161129 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-03-16 01:24:13.400781 | 
orchestrator | 2026-03-16 01:24:13.400877 | orchestrator | # Neutron 2026-03-16 01:24:13.400898 | orchestrator | 2026-03-16 01:24:13.400913 | orchestrator | + echo 2026-03-16 01:24:13.400925 | orchestrator | + echo '# Neutron' 2026-03-16 01:24:13.400936 | orchestrator | + echo 2026-03-16 01:24:13.400944 | orchestrator | + openstack network agent list 2026-03-16 01:24:16.173420 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-03-16 01:24:16.173496 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2026-03-16 01:24:16.173505 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-03-16 01:24:16.173511 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2026-03-16 01:24:16.173516 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2026-03-16 01:24:16.173522 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2026-03-16 01:24:16.173527 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2026-03-16 01:24:16.173532 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2026-03-16 01:24:16.173537 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2026-03-16 01:24:16.173542 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2026-03-16 01:24:16.173547 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | 
UP | neutron-ovn-metadata-agent | 2026-03-16 01:24:16.173552 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2026-03-16 01:24:16.173557 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-03-16 01:24:16.413894 | orchestrator | + openstack network service provider list 2026-03-16 01:24:18.937455 | orchestrator | +---------------+------+---------+ 2026-03-16 01:24:18.937558 | orchestrator | | Service Type | Name | Default | 2026-03-16 01:24:18.937577 | orchestrator | +---------------+------+---------+ 2026-03-16 01:24:18.937591 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2026-03-16 01:24:18.937604 | orchestrator | +---------------+------+---------+ 2026-03-16 01:24:19.189366 | orchestrator | 2026-03-16 01:24:19.189456 | orchestrator | # Nova 2026-03-16 01:24:19.189475 | orchestrator | 2026-03-16 01:24:19.189487 | orchestrator | + echo 2026-03-16 01:24:19.189499 | orchestrator | + echo '# Nova' 2026-03-16 01:24:19.189512 | orchestrator | + echo 2026-03-16 01:24:19.189524 | orchestrator | + openstack compute service list 2026-03-16 01:24:22.019436 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-03-16 01:24:22.019544 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2026-03-16 01:24:22.019555 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-03-16 01:24:22.019563 | orchestrator | | cd47bebf-b65c-4fd4-bef6-b6d8796db39b | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-03-16T01:24:18.000000 | 2026-03-16 01:24:22.019606 | orchestrator | | 1cf4d504-e703-4e86-b6fc-a5ab7031dc0f | nova-scheduler | testbed-node-1 
| internal | enabled | up | 2026-03-16T01:24:21.000000 | 2026-03-16 01:24:22.019615 | orchestrator | | 193659a4-b377-429c-bb13-e6748bc0c176 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-03-16T01:24:21.000000 | 2026-03-16 01:24:22.019622 | orchestrator | | cf0002e4-cae9-48c8-a24a-1d31e7dd95ff | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-03-16T01:24:18.000000 | 2026-03-16 01:24:22.019629 | orchestrator | | 5ee6da73-728a-47fa-9a4c-ac857e50785e | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-03-16T01:24:19.000000 | 2026-03-16 01:24:22.019637 | orchestrator | | 04b6b506-294e-42d6-b85a-8b5ad7d6210f | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-03-16T01:24:19.000000 | 2026-03-16 01:24:22.019644 | orchestrator | | c20a30b8-6cd5-4d87-a7a4-d22c2a5adf07 | nova-compute | testbed-node-5 | nova | enabled | up | 2026-03-16T01:24:12.000000 | 2026-03-16 01:24:22.019651 | orchestrator | | 4d350fb6-69ac-4ed3-bb63-235470e284f2 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-03-16T01:24:13.000000 | 2026-03-16 01:24:22.019658 | orchestrator | | 4623041a-1b8b-4606-b565-9018fdde6e0e | nova-compute | testbed-node-4 | nova | enabled | up | 2026-03-16T01:24:13.000000 | 2026-03-16 01:24:22.019666 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-03-16 01:24:22.266779 | orchestrator | + openstack hypervisor list 2026-03-16 01:24:25.448167 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-03-16 01:24:25.448257 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2026-03-16 01:24:25.448267 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-03-16 01:24:25.448275 | orchestrator | | 7498cfa7-6cd1-4dd9-bea9-36251a22885a | 
testbed-node-5 | QEMU | 192.168.16.15 | up | 2026-03-16 01:24:25.448281 | orchestrator | | f2b8d3e4-6aed-4c0d-9e87-8e8ab047d471 | testbed-node-3 | QEMU | 192.168.16.13 | up | 2026-03-16 01:24:25.448287 | orchestrator | | d1a11e56-6c12-49ff-91b0-4b657b843ee1 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2026-03-16 01:24:25.448293 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-03-16 01:24:25.676988 | orchestrator | 2026-03-16 01:24:25.677082 | orchestrator | # Run OpenStack test play 2026-03-16 01:24:25.677099 | orchestrator | 2026-03-16 01:24:25.677110 | orchestrator | + echo 2026-03-16 01:24:25.677122 | orchestrator | + echo '# Run OpenStack test play' 2026-03-16 01:24:25.677133 | orchestrator | + echo 2026-03-16 01:24:25.677144 | orchestrator | + osism apply --environment openstack test 2026-03-16 01:24:27.673754 | orchestrator | 2026-03-16 01:24:27 | INFO  | Trying to run play test in environment openstack 2026-03-16 01:24:37.781411 | orchestrator | 2026-03-16 01:24:37 | INFO  | Task 453d3924-c8fa-4a07-9c25-cc6ae58a0873 (test) was prepared for execution. 2026-03-16 01:24:37.781522 | orchestrator | 2026-03-16 01:24:37 | INFO  | It takes a moment until task 453d3924-c8fa-4a07-9c25-cc6ae58a0873 (test) has been started and output is visible here. 
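The tables above are eyeballed for an "everything up" condition: every `cinder-*`, `nova-*`, and OVN agent row must report `up`. A sketch of how that condition could be asserted in a script instead of read by hand; `check_all_up` is a hypothetical helper, and the sample lines below are illustrative (a real run would feed it e.g. `openstack compute service list -f value -c Binary -c Host -c State`):

```shell
# Read "binary host state" lines on stdin; print any offender and
# return non-zero if a state other than "up" appears.
check_all_up() {
  awk '$NF != "up" { print "DOWN:", $0; bad = 1 } END { exit bad }'
}

# Illustrative sample mirroring the compute service table above.
printf '%s\n' \
  'nova-scheduler testbed-node-0 up' \
  'nova-compute testbed-node-3 up' |
  check_all_up && echo "all services up"
```

With machine-readable (`-f value`) output this turns the manual table inspection into a hard pass/fail step for the job.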
2026-03-16 01:27:23.056925 | orchestrator |
2026-03-16 01:27:23.057749 | orchestrator | PLAY [Create test project] *****************************************************
2026-03-16 01:27:23.057779 | orchestrator |
2026-03-16 01:27:23.057785 | orchestrator | TASK [Create test domain] ******************************************************
2026-03-16 01:27:23.057791 | orchestrator | Monday 16 March 2026 01:24:41 +0000 (0:00:00.069) 0:00:00.069 **********
2026-03-16 01:27:23.057796 | orchestrator | changed: [localhost]
2026-03-16 01:27:23.057802 | orchestrator |
2026-03-16 01:27:23.057808 | orchestrator | TASK [Create test-admin user] **************************************************
2026-03-16 01:27:23.057813 | orchestrator | Monday 16 March 2026 01:24:45 +0000 (0:00:03.564) 0:00:03.633 **********
2026-03-16 01:27:23.057818 | orchestrator | changed: [localhost]
2026-03-16 01:27:23.057839 | orchestrator |
2026-03-16 01:27:23.057845 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2026-03-16 01:27:23.057850 | orchestrator | Monday 16 March 2026 01:24:49 +0000 (0:00:04.062) 0:00:07.696 **********
2026-03-16 01:27:23.057855 | orchestrator | changed: [localhost]
2026-03-16 01:27:23.057860 | orchestrator |
2026-03-16 01:27:23.057865 | orchestrator | TASK [Create test project] *****************************************************
2026-03-16 01:27:23.057870 | orchestrator | Monday 16 March 2026 01:24:55 +0000 (0:00:06.283) 0:00:13.980 **********
2026-03-16 01:27:23.057874 | orchestrator | changed: [localhost]
2026-03-16 01:27:23.057880 | orchestrator |
2026-03-16 01:27:23.057885 | orchestrator | TASK [Create test user] ********************************************************
2026-03-16 01:27:23.057889 | orchestrator | Monday 16 March 2026 01:24:59 +0000 (0:00:04.033) 0:00:18.013 **********
2026-03-16 01:27:23.057893 | orchestrator | changed: [localhost]
2026-03-16 01:27:23.057897 | orchestrator |
2026-03-16 01:27:23.057902 | orchestrator | TASK [Add member roles to user test] *******************************************
2026-03-16 01:27:23.057906 | orchestrator | Monday 16 March 2026 01:25:04 +0000 (0:00:04.237) 0:00:22.251 **********
2026-03-16 01:27:23.057910 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2026-03-16 01:27:23.057915 | orchestrator | changed: [localhost] => (item=member)
2026-03-16 01:27:23.057919 | orchestrator | changed: [localhost] => (item=creator)
2026-03-16 01:27:23.057923 | orchestrator |
2026-03-16 01:27:23.057928 | orchestrator | TASK [Create test server group] ************************************************
2026-03-16 01:27:23.057932 | orchestrator | Monday 16 March 2026 01:25:15 +0000 (0:00:11.319) 0:00:33.571 **********
2026-03-16 01:27:23.057936 | orchestrator | changed: [localhost]
2026-03-16 01:27:23.057940 | orchestrator |
2026-03-16 01:27:23.057944 | orchestrator | TASK [Create ssh security group] ***********************************************
2026-03-16 01:27:23.057958 | orchestrator | Monday 16 March 2026 01:25:19 +0000 (0:00:04.226) 0:00:37.797 **********
2026-03-16 01:27:23.057962 | orchestrator | changed: [localhost]
2026-03-16 01:27:23.057966 | orchestrator |
2026-03-16 01:27:23.057970 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2026-03-16 01:27:23.057974 | orchestrator | Monday 16 March 2026 01:25:24 +0000 (0:00:04.709) 0:00:42.507 **********
2026-03-16 01:27:23.057978 | orchestrator | changed: [localhost]
2026-03-16 01:27:23.057982 | orchestrator |
2026-03-16 01:27:23.057986 | orchestrator | TASK [Create icmp security group] **********************************************
2026-03-16 01:27:23.057991 | orchestrator | Monday 16 March 2026 01:25:28 +0000 (0:00:04.297) 0:00:46.805 **********
2026-03-16 01:27:23.057995 | orchestrator | changed: [localhost]
2026-03-16 01:27:23.057999 | orchestrator |
2026-03-16 01:27:23.058003 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2026-03-16 01:27:23.058007 | orchestrator | Monday 16 March 2026 01:25:32 +0000 (0:00:04.186) 0:00:50.992 **********
2026-03-16 01:27:23.058011 | orchestrator | changed: [localhost]
2026-03-16 01:27:23.058047 | orchestrator |
2026-03-16 01:27:23.058051 | orchestrator | TASK [Create test keypair] *****************************************************
2026-03-16 01:27:23.058055 | orchestrator | Monday 16 March 2026 01:25:36 +0000 (0:00:04.192) 0:00:55.184 **********
2026-03-16 01:27:23.058060 | orchestrator | changed: [localhost]
2026-03-16 01:27:23.058064 | orchestrator |
2026-03-16 01:27:23.058068 | orchestrator | TASK [Create test network] *****************************************************
2026-03-16 01:27:23.058072 | orchestrator | Monday 16 March 2026 01:25:40 +0000 (0:00:03.745) 0:00:58.929 **********
2026-03-16 01:27:23.058076 | orchestrator | changed: [localhost]
2026-03-16 01:27:23.058080 | orchestrator |
2026-03-16 01:27:23.058085 | orchestrator | TASK [Create test subnet] ******************************************************
2026-03-16 01:27:23.058089 | orchestrator | Monday 16 March 2026 01:25:45 +0000 (0:00:05.032) 0:01:03.962 **********
2026-03-16 01:27:23.058093 | orchestrator | changed: [localhost]
2026-03-16 01:27:23.058097 | orchestrator |
2026-03-16 01:27:23.058101 | orchestrator | TASK [Create test router] ******************************************************
2026-03-16 01:27:23.058105 | orchestrator | Monday 16 March 2026 01:25:51 +0000 (0:00:06.025) 0:01:09.988 **********
2026-03-16 01:27:23.058113 | orchestrator | changed: [localhost]
2026-03-16 01:27:23.058118 | orchestrator |
2026-03-16 01:27:23.058122 | orchestrator | PLAY [Manage test instances and volumes] ***************************************
2026-03-16 01:27:23.058126 | orchestrator |
2026-03-16 01:27:23.058130 | orchestrator | TASK [Get test server group] ***************************************************
2026-03-16 01:27:23.058134 | orchestrator | Monday 16 March 2026 01:26:03 +0000 (0:00:11.230) 0:01:21.218 **********
2026-03-16 01:27:23.058139 | orchestrator | ok: [localhost]
2026-03-16 01:27:23.058146 | orchestrator |
2026-03-16 01:27:23.058151 | orchestrator | TASK [Detach test volume] ******************************************************
2026-03-16 01:27:23.058155 | orchestrator | Monday 16 March 2026 01:26:06 +0000 (0:00:03.572) 0:01:24.791 **********
2026-03-16 01:27:23.058159 | orchestrator | skipping: [localhost]
2026-03-16 01:27:23.058163 | orchestrator |
2026-03-16 01:27:23.058167 | orchestrator | TASK [Delete test volume] ******************************************************
2026-03-16 01:27:23.058171 | orchestrator | Monday 16 March 2026 01:26:06 +0000 (0:00:00.048) 0:01:24.840 **********
2026-03-16 01:27:23.058175 | orchestrator | skipping: [localhost]
2026-03-16 01:27:23.058179 | orchestrator |
2026-03-16 01:27:23.058184 | orchestrator | TASK [Delete test instances] ***************************************************
2026-03-16 01:27:23.058188 | orchestrator | Monday 16 March 2026 01:26:06 +0000 (0:00:00.046) 0:01:24.886 **********
2026-03-16 01:27:23.058192 | orchestrator | skipping: [localhost] => (item=test-4)
2026-03-16 01:27:23.058209 | orchestrator | skipping: [localhost] => (item=test-3)
2026-03-16 01:27:23.058238 | orchestrator | skipping: [localhost] => (item=test-2)
2026-03-16 01:27:23.058243 | orchestrator | skipping: [localhost] => (item=test-1)
2026-03-16 01:27:23.058247 | orchestrator | skipping: [localhost] => (item=test)
2026-03-16 01:27:23.058251 | orchestrator | skipping: [localhost]
2026-03-16 01:27:23.058255 | orchestrator |
2026-03-16 01:27:23.058260 | orchestrator | TASK [Wait for instance deletion to complete] **********************************
2026-03-16 01:27:23.058264 | orchestrator | Monday 16 March 2026 01:26:06 +0000 (0:00:00.166) 0:01:25.053 **********
2026-03-16 01:27:23.058268 | orchestrator | skipping: [localhost]
2026-03-16 01:27:23.058272 | orchestrator |
2026-03-16 01:27:23.058276 | orchestrator | TASK [Create test instances] ***************************************************
2026-03-16 01:27:23.058280 | orchestrator | Monday 16 March 2026 01:26:07 +0000 (0:00:00.160) 0:01:25.214 **********
2026-03-16 01:27:23.058284 | orchestrator | changed: [localhost] => (item=test)
2026-03-16 01:27:23.058289 | orchestrator | changed: [localhost] => (item=test-1)
2026-03-16 01:27:23.058293 | orchestrator | changed: [localhost] => (item=test-2)
2026-03-16 01:27:23.058297 | orchestrator | changed: [localhost] => (item=test-3)
2026-03-16 01:27:23.058301 | orchestrator | changed: [localhost] => (item=test-4)
2026-03-16 01:27:23.058305 | orchestrator |
2026-03-16 01:27:23.058309 | orchestrator | TASK [Wait for instance creation to complete] **********************************
2026-03-16 01:27:23.058313 | orchestrator | Monday 16 March 2026 01:26:11 +0000 (0:00:04.748) 0:01:29.962 **********
2026-03-16 01:27:23.058318 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left).
2026-03-16 01:27:23.058323 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left).
2026-03-16 01:27:23.058327 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left).
2026-03-16 01:27:23.058331 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left).
2026-03-16 01:27:23.058337 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j991556894124.2645', 'results_file': '/ansible/.ansible_async/j991556894124.2645', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
2026-03-16 01:27:23.058347 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j931343327692.2670', 'results_file': '/ansible/.ansible_async/j931343327692.2670', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
2026-03-16 01:27:23.058355 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left).
2026-03-16 01:27:23.058359 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j882616144079.2695', 'results_file': '/ansible/.ansible_async/j882616144079.2695', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
2026-03-16 01:27:23.058363 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j860268728262.2720', 'results_file': '/ansible/.ansible_async/j860268728262.2720', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
2026-03-16 01:27:23.058367 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j554838770943.2745', 'results_file': '/ansible/.ansible_async/j554838770943.2745', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})
2026-03-16 01:27:23.058372 | orchestrator |
2026-03-16 01:27:23.058376 | orchestrator | TASK [Add metadata to instances] ***********************************************
2026-03-16 01:27:23.058380 | orchestrator | Monday 16 March 2026 01:27:09 +0000 (0:00:57.341) 0:02:27.304 **********
2026-03-16 01:27:23.058384 | orchestrator | changed: [localhost] => (item=test)
2026-03-16 01:27:23.058388 | orchestrator | changed: [localhost] => (item=test-1)
2026-03-16 01:27:23.058393 | orchestrator | changed: [localhost] => (item=test-2)
2026-03-16 01:27:23.058397 | orchestrator | changed: [localhost] => (item=test-3)
2026-03-16 01:27:23.058401 | orchestrator | changed: [localhost] => (item=test-4)
2026-03-16 01:27:23.058405 | orchestrator |
2026-03-16 01:27:23.058409 | orchestrator | TASK [Wait for metadata to be added] *******************************************
2026-03-16 01:27:23.058413 | orchestrator | Monday 16 March 2026 01:27:13 +0000 (0:00:04.480) 0:02:31.785 **********
2026-03-16 01:27:23.058431 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left).
2026-03-16 01:27:23.058436 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j93515328031.2856', 'results_file': '/ansible/.ansible_async/j93515328031.2856', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
2026-03-16 01:27:23.058440 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j38794746703.2881', 'results_file': '/ansible/.ansible_async/j38794746703.2881', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
2026-03-16 01:27:23.058445 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j976497059521.2906', 'results_file': '/ansible/.ansible_async/j976497059521.2906', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
2026-03-16 01:27:23.058452 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j871684183102.2931', 'results_file': '/ansible/.ansible_async/j871684183102.2931', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
2026-03-16 01:28:02.358597 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j275476517412.2956', 'results_file': '/ansible/.ansible_async/j275476517412.2956', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})
2026-03-16 01:28:02.358740 | orchestrator |
2026-03-16 01:28:02.358772 | orchestrator | TASK [Add tag to instances] ****************************************************
2026-03-16 01:28:02.358795 | orchestrator | Monday 16 March 2026 01:27:23 +0000 (0:00:09.481) 0:02:41.266 **********
2026-03-16 01:28:02.358816 | orchestrator | changed: [localhost] => (item=test)
2026-03-16 01:28:02.358836 | orchestrator | changed: [localhost] => (item=test-1)
2026-03-16 01:28:02.358850 | orchestrator | changed: [localhost] => (item=test-2)
2026-03-16 01:28:02.358861 | orchestrator | changed: [localhost] => (item=test-3)
2026-03-16 01:28:02.358872 | orchestrator | changed: [localhost] => (item=test-4)
2026-03-16 01:28:02.358909 | orchestrator |
2026-03-16 01:28:02.358921 | orchestrator | TASK [Wait for tags to be added] ***********************************************
2026-03-16 01:28:02.359048 | orchestrator | Monday 16 March 2026 01:27:27 +0000 (0:00:04.566) 0:02:45.832 **********
2026-03-16 01:28:02.359068 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left).
2026-03-16 01:28:02.359088 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j272839134675.3025', 'results_file': '/ansible/.ansible_async/j272839134675.3025', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
2026-03-16 01:28:02.359107 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j873950403768.3050', 'results_file': '/ansible/.ansible_async/j873950403768.3050', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
2026-03-16 01:28:02.359146 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j87229693215.3076', 'results_file': '/ansible/.ansible_async/j87229693215.3076', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
2026-03-16 01:28:02.359165 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j500414657577.3102', 'results_file': '/ansible/.ansible_async/j500414657577.3102', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
2026-03-16 01:28:02.359182 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j837913066675.3128', 'results_file': '/ansible/.ansible_async/j837913066675.3128', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})
2026-03-16 01:28:02.359200 | orchestrator |
2026-03-16 01:28:02.359217 | orchestrator | TASK [Create test volume] ******************************************************
2026-03-16 01:28:02.359235 | orchestrator | Monday 16 March 2026 01:27:37 +0000 (0:00:09.527) 0:02:55.359 **********
2026-03-16 01:28:02.359253 | orchestrator | changed: [localhost]
2026-03-16 01:28:02.359271 | orchestrator |
2026-03-16 01:28:02.359289 | orchestrator | TASK [Attach test volume] ******************************************************
2026-03-16 01:28:02.359307 | orchestrator | Monday 16 March 2026 01:27:43 +0000 (0:00:06.276) 0:03:01.636 **********
2026-03-16 01:28:02.359324 | orchestrator | changed: [localhost]
2026-03-16 01:28:02.359342 | orchestrator |
2026-03-16 01:28:02.359360 | orchestrator | TASK [Create floating ip address] **********************************************
2026-03-16 01:28:02.359378 | orchestrator | Monday 16 March 2026 01:27:56 +0000 (0:00:13.512) 0:03:15.149 **********
2026-03-16 01:28:02.359397 | orchestrator | ok: [localhost]
2026-03-16 01:28:02.359417 | orchestrator |
2026-03-16 01:28:02.359438 | orchestrator | TASK [Print floating ip address] ***********************************************
2026-03-16 01:28:02.359457 | orchestrator | Monday 16 March 2026 01:28:02 +0000 (0:00:05.120) 0:03:20.269 **********
2026-03-16 01:28:02.359476 | orchestrator | ok: [localhost] => {
2026-03-16 01:28:02.359495 | orchestrator |  "msg": "192.168.112.118"
2026-03-16 01:28:02.359516 | orchestrator | }
2026-03-16 01:28:02.359535 | orchestrator |
2026-03-16 01:28:02.359554 | orchestrator | PLAY RECAP *********************************************************************
2026-03-16 01:28:02.359573 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-16 01:28:02.359593 | orchestrator |
2026-03-16 01:28:02.359611 | orchestrator |
2026-03-16 01:28:02.359628 | orchestrator | TASKS RECAP ********************************************************************
2026-03-16 01:28:02.359646 | orchestrator | Monday 16 March 2026 01:28:02 +0000 (0:00:00.052) 0:03:20.321 **********
2026-03-16 01:28:02.359663 | orchestrator | ===============================================================================
2026-03-16 01:28:02.359681 | orchestrator | Wait for instance creation to complete --------------------------------- 57.34s
2026-03-16 01:28:02.359699 | orchestrator | Attach test volume ----------------------------------------------------- 13.51s
2026-03-16 01:28:02.359717 | orchestrator | Add member roles to user test ------------------------------------------ 11.32s
2026-03-16 01:28:02.359754 | orchestrator | Create test router ----------------------------------------------------- 11.23s
2026-03-16 01:28:02.359772 | orchestrator | Wait for tags to be added ----------------------------------------------- 9.53s
2026-03-16 01:28:02.359791 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.48s
2026-03-16 01:28:02.359809 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.28s
2026-03-16 01:28:02.359855 | orchestrator | Create test volume ------------------------------------------------------ 6.28s
2026-03-16 01:28:02.359875 | orchestrator | Create test subnet ------------------------------------------------------ 6.03s
2026-03-16 01:28:02.359893 | orchestrator | Create floating ip address ---------------------------------------------- 5.12s
2026-03-16 01:28:02.359912 | orchestrator | Create test network ----------------------------------------------------- 5.03s
2026-03-16 01:28:02.359961 | orchestrator | Create test instances --------------------------------------------------- 4.75s
2026-03-16 01:28:02.359979 | orchestrator | Create ssh security group ----------------------------------------------- 4.71s
2026-03-16 01:28:02.359997 | orchestrator | Add tag to instances ---------------------------------------------------- 4.57s
2026-03-16 01:28:02.360015 | orchestrator | Add metadata to instances ----------------------------------------------- 4.48s
2026-03-16 01:28:02.360034 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.30s
2026-03-16 01:28:02.360053 | orchestrator | Create test user -------------------------------------------------------- 4.24s
2026-03-16 01:28:02.360073 | orchestrator | Create test server group ------------------------------------------------ 4.23s
2026-03-16 01:28:02.360092 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.19s
2026-03-16 01:28:02.360112 | orchestrator | Create icmp security group ---------------------------------------------- 4.19s
2026-03-16 01:28:02.648511 | orchestrator | + server_list
2026-03-16 01:28:02.648605 | orchestrator | + openstack --os-cloud test server list
2026-03-16 01:28:06.433311 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-03-16 01:28:06.433406 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2026-03-16 01:28:06.433421 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-03-16 01:28:06.433453 | orchestrator | | 92494364-be72-4c60-adbf-690d5eb3aa6f | test-4 | ACTIVE | test=192.168.112.130, 192.168.200.229 | N/A (booted from volume) | SCS-1L-1 |
2026-03-16 01:28:06.433465 | orchestrator | | 03f92a83-1629-4e1b-903e-ab8e98439f7e | test-3 | ACTIVE | test=192.168.112.163, 192.168.200.220 | N/A (booted from volume) | SCS-1L-1 |
2026-03-16 01:28:06.433477 | orchestrator | | a4b55e5e-f178-4874-b74d-5c76d1e893c6 | test-1 | ACTIVE | test=192.168.112.167, 192.168.200.243 | N/A (booted from volume) | SCS-1L-1 |
2026-03-16 01:28:06.433488 | orchestrator | | e93f9cf0-2372-4fa3-91c1-2bce80c9218c | test-2 | ACTIVE | test=192.168.112.196, 192.168.200.35 | N/A (booted from volume) | SCS-1L-1 |
2026-03-16 01:28:06.433499 | orchestrator | | 3a3ae2c4-7a76-4592-afcc-c0337638371a | test | ACTIVE | test=192.168.112.118, 192.168.200.83 | N/A (booted from volume) | SCS-1L-1 |
2026-03-16 01:28:06.433510 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-03-16 01:28:06.696976 | orchestrator | + openstack --os-cloud test server show test
2026-03-16 01:28:10.221620 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-03-16 01:28:10.221703 | orchestrator | | Field | Value |
2026-03-16 01:28:10.221724 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-03-16 01:28:10.221729 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-03-16 01:28:10.221734 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-03-16 01:28:10.221739 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-03-16 01:28:10.221743 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2026-03-16 01:28:10.221748 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-03-16 01:28:10.221753 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-03-16 01:28:10.221767 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-03-16 01:28:10.221772 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-03-16 01:28:10.221786 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-03-16 01:28:10.221791 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-03-16 01:28:10.221795 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-03-16 01:28:10.221800 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-03-16 01:28:10.221805 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-03-16 01:28:10.221810 | orchestrator | | OS-EXT-STS:task_state | None |
2026-03-16 01:28:10.221817 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-03-16 01:28:10.221822 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-16T01:26:44.000000 |
2026-03-16 01:28:10.221830 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-03-16 01:28:10.221841 | orchestrator | | accessIPv4 | |
2026-03-16 01:28:10.221846 | orchestrator | | accessIPv6 | |
2026-03-16 01:28:10.221850 | orchestrator | | addresses | test=192.168.112.118, 192.168.200.83 |
2026-03-16 01:28:10.221855 | orchestrator | | config_drive | |
2026-03-16 01:28:10.221859 | orchestrator | | created | 2026-03-16T01:26:16Z |
2026-03-16 01:28:10.221864 | orchestrator | | description | None |
2026-03-16 01:28:10.221869 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-03-16 01:28:10.221876 | orchestrator | | hostId | 6cb7df02483c0ccab23c9a8a8494cad28a2b73a5a041c139ed27404b |
2026-03-16 01:28:10.221881 | orchestrator | | host_status | None |
2026-03-16 01:28:10.221893 | orchestrator | | id | 3a3ae2c4-7a76-4592-afcc-c0337638371a |
2026-03-16 01:28:10.221898 | orchestrator | | image | N/A (booted from volume) |
2026-03-16 01:28:10.221903 | orchestrator | | key_name | test |
2026-03-16 01:28:10.221908 | orchestrator | | locked | False |
2026-03-16 01:28:10.221912 | orchestrator | | locked_reason | None |
2026-03-16 01:28:10.221917 | orchestrator | | name | test |
2026-03-16 01:28:10.221921 | orchestrator | | pinned_availability_zone | None |
2026-03-16 01:28:10.221926 | orchestrator | | progress | 0 |
2026-03-16 01:28:10.221934 | orchestrator | | project_id | 8e4e872be8134367a7c85f8eced497c4 |
2026-03-16 01:28:10.221938 | orchestrator | | properties | hostname='test' |
2026-03-16 01:28:10.221950 | orchestrator | | security_groups | name='icmp' |
2026-03-16 01:28:10.221955 | orchestrator | | | name='ssh' |
2026-03-16 01:28:10.221960 | orchestrator | | server_groups | None |
2026-03-16 01:28:10.221964 | orchestrator | | status | ACTIVE |
2026-03-16 01:28:10.221969 | orchestrator | | tags | test |
2026-03-16 01:28:10.221973 | orchestrator | | trusted_image_certificates | None |
2026-03-16 01:28:10.221978 | orchestrator | | updated | 2026-03-16T01:27:15Z |
2026-03-16 01:28:10.221983 | orchestrator | | user_id | 1d42281c09d542d2b55f767ac1c07dfd |
2026-03-16 01:28:10.221990 | orchestrator | | volumes_attached | delete_on_termination='True', id='10af9a91-c9a4-4fe6-8829-58757569b683' |
2026-03-16 01:28:10.221998 | orchestrator | | | delete_on_termination='False', id='5bc98c30-c2b2-46af-a072-ad31e8f93266' |
2026-03-16 01:28:10.226370 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-03-16 01:28:10.484443 | orchestrator | + openstack --os-cloud test server show test-1
2026-03-16 01:28:13.576901 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-03-16 01:28:13.576968 | orchestrator | | Field | Value |
2026-03-16 01:28:13.576974 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-03-16 01:28:13.576978 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-03-16 01:28:13.576982 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-03-16 01:28:13.576986 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-03-16 01:28:13.576990 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 |
2026-03-16 01:28:13.577009 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-03-16 01:28:13.577013 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-03-16 01:28:13.577025 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-03-16 01:28:13.577030 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-03-16 01:28:13.577033 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-03-16 01:28:13.577037 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-03-16 01:28:13.577041 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-03-16 01:28:13.577045 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-03-16 01:28:13.577056 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-03-16 01:28:13.577082 | orchestrator | | OS-EXT-STS:task_state | None |
2026-03-16 01:28:13.577091 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-03-16 01:28:13.577097 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-16T01:26:44.000000 |
2026-03-16 01:28:13.577107 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-03-16 01:28:13.577113 | orchestrator | | accessIPv4 | |
2026-03-16 01:28:13.577119 | orchestrator | | accessIPv6 | |
2026-03-16 01:28:13.577125 | orchestrator | | addresses | test=192.168.112.167, 192.168.200.243 |
2026-03-16 01:28:13.577131 | orchestrator | | config_drive | |
2026-03-16 01:28:13.577137 | orchestrator | | created | 2026-03-16T01:26:17Z |
2026-03-16 01:28:13.577147 | orchestrator | | description | None |
2026-03-16 01:28:13.577152 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-03-16 01:28:13.577161 | orchestrator | | hostId | 6cb7df02483c0ccab23c9a8a8494cad28a2b73a5a041c139ed27404b |
2026-03-16 01:28:13.577166 | orchestrator | | host_status | None |
2026-03-16 01:28:13.577176 | orchestrator | | id | a4b55e5e-f178-4874-b74d-5c76d1e893c6 |
2026-03-16 01:28:13.577182 | orchestrator | | image | N/A (booted from volume) |
2026-03-16 01:28:13.577188 | orchestrator | | key_name | test |
2026-03-16 01:28:13.577193 | orchestrator | | locked | False |
2026-03-16 01:28:13.577198 | orchestrator | | locked_reason | None |
2026-03-16 01:28:13.577204 | orchestrator | | name | test-1 |
2026-03-16 01:28:13.577215 | orchestrator | | pinned_availability_zone | None |
2026-03-16 01:28:13.577224 | orchestrator | | progress | 0 |
2026-03-16 01:28:13.577230 | orchestrator | | project_id | 8e4e872be8134367a7c85f8eced497c4 |
2026-03-16 01:28:13.577236 | orchestrator | | properties | hostname='test-1' |
2026-03-16 01:28:13.577246 | orchestrator | | security_groups | name='icmp' |
2026-03-16 01:28:13.577252 | orchestrator | | | name='ssh' |
2026-03-16 01:28:13.577258 | orchestrator | | server_groups | None |
2026-03-16 01:28:13.577264 | orchestrator | | status | ACTIVE |
2026-03-16 01:28:13.577270 | orchestrator | | tags | test |
2026-03-16 01:28:13.577281 | orchestrator | | trusted_image_certificates | None |
2026-03-16 01:28:13.577287 | orchestrator | | updated | 2026-03-16T01:27:15Z |
2026-03-16 01:28:13.577296 | orchestrator | | user_id | 1d42281c09d542d2b55f767ac1c07dfd |
2026-03-16 01:28:13.577302 | orchestrator | | volumes_attached | delete_on_termination='True', id='fd834329-f150-4239-9474-65acf2bfff3d' |
2026-03-16 01:28:13.580597 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-03-16 01:28:13.826686 | orchestrator | + openstack --os-cloud test server show test-2
2026-03-16 01:28:16.849535 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-03-16 01:28:16.849643 | orchestrator | | Field | Value |
2026-03-16 01:28:16.849664 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-03-16 01:28:16.849679 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-03-16 01:28:16.849723 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-03-16 01:28:16.849741 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-03-16 01:28:16.849756 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 |
2026-03-16 01:28:16.849787 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-03-16 01:28:16.849802 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-03-16 01:28:16.849838 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-03-16 01:28:16.849855 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-03-16 01:28:16.849869 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-03-16 01:28:16.849883 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-03-16 01:28:16.849908 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-03-16 01:28:16.849924 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-03-16 01:28:16.849940 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-03-16 01:28:16.849955 | orchestrator | | OS-EXT-STS:task_state | None |
2026-03-16 01:28:16.849977 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-03-16 01:28:16.849993 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-16T01:26:46.000000 |
2026-03-16 01:28:16.850084 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-03-16 01:28:16.850143 | orchestrator | | accessIPv4 | |
2026-03-16 01:28:16.850160 | orchestrator | | accessIPv6 | |
2026-03-16 01:28:16.850194 | orchestrator | | addresses | test=192.168.112.196, 192.168.200.35 |
2026-03-16 01:28:16.850210 | orchestrator | | config_drive | |
2026-03-16 01:28:16.850224 | orchestrator | | created | 2026-03-16T01:26:17Z |
2026-03-16 01:28:16.850240 | orchestrator | | description | None |
2026-03-16 01:28:16.850255 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-03-16 01:28:16.850280 | orchestrator | | hostId | 324e366f4566681bf4e70c83c6e2f6708434ef57d977a563f79ffbb6 |
2026-03-16 01:28:16.850295 | orchestrator | | host_status | None |
2026-03-16 01:28:16.850323 | orchestrator | | id | e93f9cf0-2372-4fa3-91c1-2bce80c9218c |
2026-03-16 01:28:16.850339 | orchestrator | | image | N/A (booted from volume) |
2026-03-16 01:28:16.850353 | orchestrator | | key_name | test |
2026-03-16 01:28:16.850378 | orchestrator | | locked | False |
2026-03-16 01:28:16.850393 | orchestrator | | locked_reason | None |
2026-03-16 01:28:16.850407 | orchestrator | | name | test-2 |
2026-03-16 01:28:16.850423 | orchestrator | | pinned_availability_zone | None |
2026-03-16 01:28:16.850439 | orchestrator | | progress | 0 |
2026-03-16 01:28:16.850460 | orchestrator | | project_id | 8e4e872be8134367a7c85f8eced497c4 |
2026-03-16 01:28:16.850474 | orchestrator | | properties | hostname='test-2' |
2026-03-16 01:28:16.850497 | orchestrator | | security_groups | name='icmp' |
2026-03-16 01:28:16.850513 | orchestrator | | | name='ssh' |
2026-03-16 01:28:16.850537 | orchestrator | | server_groups | None |
2026-03-16 01:28:16.850552 | orchestrator | | status | ACTIVE |
2026-03-16 01:28:16.850566 | orchestrator | | tags | test |
2026-03-16 01:28:16.850581 | orchestrator | | trusted_image_certificates | None |
2026-03-16 01:28:16.850594 | orchestrator | | updated | 2026-03-16T01:27:16Z |
2026-03-16 01:28:16.850609 | orchestrator | | user_id | 1d42281c09d542d2b55f767ac1c07dfd |
2026-03-16 01:28:16.850630 | orchestrator | | volumes_attached | delete_on_termination='True', id='a6f73a0c-fb26-4e1d-aa21-62bc56837a48' |
2026-03-16 01:28:16.853725 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-03-16 01:28:17.101003 | orchestrator | + openstack --os-cloud test server show test-3
2026-03-16 01:28:20.046078 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-03-16 01:28:20.046185 | orchestrator | | Field | Value |
2026-03-16 01:28:20.046193 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-03-16 01:28:20.046198 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-03-16 01:28:20.046203 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-03-16 01:28:20.046207 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-03-16 01:28:20.046212 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 |
2026-03-16 01:28:20.046226 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-03-16 01:28:20.046231 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-03-16
01:28:20.046247 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-16 01:28:20.046257 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-16 01:28:20.046262 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-16 01:28:20.046266 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-16 01:28:20.046271 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-16 01:28:20.046275 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-16 01:28:20.046280 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-16 01:28:20.046284 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-16 01:28:20.046292 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-16 01:28:20.046296 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-16T01:26:46.000000 | 2026-03-16 01:28:20.046309 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-16 01:28:20.046314 | orchestrator | | accessIPv4 | | 2026-03-16 01:28:20.046318 | orchestrator | | accessIPv6 | | 2026-03-16 01:28:20.046325 | orchestrator | | addresses | test=192.168.112.163, 192.168.200.220 | 2026-03-16 01:28:20.046332 | orchestrator | | config_drive | | 2026-03-16 01:28:20.046339 | orchestrator | | created | 2026-03-16T01:26:17Z | 2026-03-16 01:28:20.046346 | orchestrator | | description | None | 2026-03-16 01:28:20.046352 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-16 01:28:20.046363 | orchestrator | | hostId | 324e366f4566681bf4e70c83c6e2f6708434ef57d977a563f79ffbb6 | 2026-03-16 01:28:20.046370 | orchestrator | | host_status | None | 2026-03-16 01:28:20.046388 | orchestrator | | id | 
03f92a83-1629-4e1b-903e-ab8e98439f7e | 2026-03-16 01:28:20.046395 | orchestrator | | image | N/A (booted from volume) | 2026-03-16 01:28:20.046403 | orchestrator | | key_name | test | 2026-03-16 01:28:20.046410 | orchestrator | | locked | False | 2026-03-16 01:28:20.046417 | orchestrator | | locked_reason | None | 2026-03-16 01:28:20.046424 | orchestrator | | name | test-3 | 2026-03-16 01:28:20.046431 | orchestrator | | pinned_availability_zone | None | 2026-03-16 01:28:20.046438 | orchestrator | | progress | 0 | 2026-03-16 01:28:20.046445 | orchestrator | | project_id | 8e4e872be8134367a7c85f8eced497c4 | 2026-03-16 01:28:20.046459 | orchestrator | | properties | hostname='test-3' | 2026-03-16 01:28:20.046470 | orchestrator | | security_groups | name='icmp' | 2026-03-16 01:28:20.046479 | orchestrator | | | name='ssh' | 2026-03-16 01:28:20.046486 | orchestrator | | server_groups | None | 2026-03-16 01:28:20.046494 | orchestrator | | status | ACTIVE | 2026-03-16 01:28:20.046501 | orchestrator | | tags | test | 2026-03-16 01:28:20.046509 | orchestrator | | trusted_image_certificates | None | 2026-03-16 01:28:20.046516 | orchestrator | | updated | 2026-03-16T01:27:17Z | 2026-03-16 01:28:20.046523 | orchestrator | | user_id | 1d42281c09d542d2b55f767ac1c07dfd | 2026-03-16 01:28:20.046818 | orchestrator | | volumes_attached | delete_on_termination='True', id='bfbd72fa-6545-4e4f-b12c-fcf767b22b1b' | 2026-03-16 01:28:20.048010 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-16 01:28:20.359238 | orchestrator | + openstack --os-cloud test server show test-4 2026-03-16 01:28:23.246728 | 
orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-16 01:28:23.246851 | orchestrator | | Field | Value | 2026-03-16 01:28:23.246876 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-16 01:28:23.246894 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-16 01:28:23.246906 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-16 01:28:23.246932 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-16 01:28:23.246943 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-03-16 01:28:23.246975 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-16 01:28:23.246986 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-16 01:28:23.247021 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-16 01:28:23.247033 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-16 01:28:23.247043 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-16 01:28:23.247053 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-16 01:28:23.247063 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-16 01:28:23.247073 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-16 01:28:23.247088 | orchestrator | | 
OS-EXT-STS:power_state | Running | 2026-03-16 01:28:23.247098 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-16 01:28:23.247115 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-16 01:28:23.247125 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-16T01:26:46.000000 | 2026-03-16 01:28:23.247141 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-16 01:28:23.247152 | orchestrator | | accessIPv4 | | 2026-03-16 01:28:23.247162 | orchestrator | | accessIPv6 | | 2026-03-16 01:28:23.247172 | orchestrator | | addresses | test=192.168.112.130, 192.168.200.229 | 2026-03-16 01:28:23.247212 | orchestrator | | config_drive | | 2026-03-16 01:28:23.247231 | orchestrator | | created | 2026-03-16T01:26:19Z | 2026-03-16 01:28:23.247249 | orchestrator | | description | None | 2026-03-16 01:28:23.247267 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-16 01:28:23.247279 | orchestrator | | hostId | 324e366f4566681bf4e70c83c6e2f6708434ef57d977a563f79ffbb6 | 2026-03-16 01:28:23.247291 | orchestrator | | host_status | None | 2026-03-16 01:28:23.247309 | orchestrator | | id | 92494364-be72-4c60-adbf-690d5eb3aa6f | 2026-03-16 01:28:23.247321 | orchestrator | | image | N/A (booted from volume) | 2026-03-16 01:28:23.247333 | orchestrator | | key_name | test | 2026-03-16 01:28:23.247344 | orchestrator | | locked | False | 2026-03-16 01:28:23.247355 | orchestrator | | locked_reason | None | 2026-03-16 01:28:23.247367 | orchestrator | | name | test-4 | 2026-03-16 01:28:23.247389 | orchestrator | | pinned_availability_zone | None | 2026-03-16 01:28:23.247401 | orchestrator | | progress | 0 | 2026-03-16 
01:28:23.247412 | orchestrator | | project_id | 8e4e872be8134367a7c85f8eced497c4 | 2026-03-16 01:28:23.247423 | orchestrator | | properties | hostname='test-4' | 2026-03-16 01:28:23.247441 | orchestrator | | security_groups | name='icmp' | 2026-03-16 01:28:23.247453 | orchestrator | | | name='ssh' | 2026-03-16 01:28:23.247463 | orchestrator | | server_groups | None | 2026-03-16 01:28:23.247474 | orchestrator | | status | ACTIVE | 2026-03-16 01:28:23.247491 | orchestrator | | tags | test | 2026-03-16 01:28:23.247516 | orchestrator | | trusted_image_certificates | None | 2026-03-16 01:28:23.247539 | orchestrator | | updated | 2026-03-16T01:27:18Z | 2026-03-16 01:28:23.247555 | orchestrator | | user_id | 1d42281c09d542d2b55f767ac1c07dfd | 2026-03-16 01:28:23.247587 | orchestrator | | volumes_attached | delete_on_termination='True', id='0ab9bdce-7a45-4029-9778-457435ba6c78' | 2026-03-16 01:28:23.251005 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-16 01:28:23.495431 | orchestrator | + server_ping 2026-03-16 01:28:23.496071 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-03-16 01:28:23.496278 | orchestrator | ++ tr -d '\r' 2026-03-16 01:28:26.392748 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-16 01:28:26.392862 | orchestrator | + ping -c3 192.168.112.130 2026-03-16 01:28:26.410010 | orchestrator | PING 192.168.112.130 (192.168.112.130) 56(84) bytes of data. 
2026-03-16 01:28:26.410335 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=1 ttl=63 time=6.37 ms 2026-03-16 01:28:27.407563 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=2 ttl=63 time=2.31 ms 2026-03-16 01:28:28.408879 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=3 ttl=63 time=1.72 ms 2026-03-16 01:28:28.409690 | orchestrator | 2026-03-16 01:28:28.409718 | orchestrator | --- 192.168.112.130 ping statistics --- 2026-03-16 01:28:28.409726 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-16 01:28:28.409733 | orchestrator | rtt min/avg/max/mdev = 1.724/3.466/6.365/2.063 ms 2026-03-16 01:28:28.409750 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-16 01:28:28.409757 | orchestrator | + ping -c3 192.168.112.163 2026-03-16 01:28:28.419782 | orchestrator | PING 192.168.112.163 (192.168.112.163) 56(84) bytes of data. 2026-03-16 01:28:28.419848 | orchestrator | 64 bytes from 192.168.112.163: icmp_seq=1 ttl=63 time=6.19 ms 2026-03-16 01:28:29.418754 | orchestrator | 64 bytes from 192.168.112.163: icmp_seq=2 ttl=63 time=2.66 ms 2026-03-16 01:28:30.419614 | orchestrator | 64 bytes from 192.168.112.163: icmp_seq=3 ttl=63 time=1.68 ms 2026-03-16 01:28:30.419678 | orchestrator | 2026-03-16 01:28:30.419685 | orchestrator | --- 192.168.112.163 ping statistics --- 2026-03-16 01:28:30.419690 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-03-16 01:28:30.419713 | orchestrator | rtt min/avg/max/mdev = 1.675/3.510/6.194/1.939 ms 2026-03-16 01:28:30.419724 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-16 01:28:30.419729 | orchestrator | + ping -c3 192.168.112.196 2026-03-16 01:28:30.427675 | orchestrator | PING 192.168.112.196 (192.168.112.196) 56(84) bytes of data. 
2026-03-16 01:28:30.427770 | orchestrator | 64 bytes from 192.168.112.196: icmp_seq=1 ttl=63 time=5.37 ms 2026-03-16 01:28:31.426161 | orchestrator | 64 bytes from 192.168.112.196: icmp_seq=2 ttl=63 time=3.10 ms 2026-03-16 01:28:32.427114 | orchestrator | 64 bytes from 192.168.112.196: icmp_seq=3 ttl=63 time=1.89 ms 2026-03-16 01:28:32.427200 | orchestrator | 2026-03-16 01:28:32.427214 | orchestrator | --- 192.168.112.196 ping statistics --- 2026-03-16 01:28:32.427227 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-03-16 01:28:32.427237 | orchestrator | rtt min/avg/max/mdev = 1.889/3.454/5.371/1.442 ms 2026-03-16 01:28:32.427248 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-16 01:28:32.427259 | orchestrator | + ping -c3 192.168.112.167 2026-03-16 01:28:32.437706 | orchestrator | PING 192.168.112.167 (192.168.112.167) 56(84) bytes of data. 2026-03-16 01:28:32.437792 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=1 ttl=63 time=6.05 ms 2026-03-16 01:28:33.435612 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=2 ttl=63 time=2.43 ms 2026-03-16 01:28:34.437104 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=3 ttl=63 time=2.15 ms 2026-03-16 01:28:34.437197 | orchestrator | 2026-03-16 01:28:34.437210 | orchestrator | --- 192.168.112.167 ping statistics --- 2026-03-16 01:28:34.437218 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-16 01:28:34.437225 | orchestrator | rtt min/avg/max/mdev = 2.151/3.545/6.054/1.777 ms 2026-03-16 01:28:34.437761 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-16 01:28:34.437798 | orchestrator | + ping -c3 192.168.112.118 2026-03-16 01:28:34.449551 | orchestrator | PING 192.168.112.118 (192.168.112.118) 56(84) bytes of data. 
2026-03-16 01:28:34.449628 | orchestrator | 64 bytes from 192.168.112.118: icmp_seq=1 ttl=63 time=7.17 ms 2026-03-16 01:28:35.446182 | orchestrator | 64 bytes from 192.168.112.118: icmp_seq=2 ttl=63 time=2.39 ms 2026-03-16 01:28:36.447920 | orchestrator | 64 bytes from 192.168.112.118: icmp_seq=3 ttl=63 time=1.90 ms 2026-03-16 01:28:36.448025 | orchestrator | 2026-03-16 01:28:36.448036 | orchestrator | --- 192.168.112.118 ping statistics --- 2026-03-16 01:28:36.448044 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-16 01:28:36.448051 | orchestrator | rtt min/avg/max/mdev = 1.898/3.818/7.166/2.375 ms 2026-03-16 01:28:36.448058 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-03-16 01:28:36.610186 | orchestrator | ok: Runtime: 0:07:57.821252 2026-03-16 01:28:36.658482 | 2026-03-16 01:28:36.658734 | TASK [Run tempest] 2026-03-16 01:28:37.405313 | orchestrator | 2026-03-16 01:28:37.405588 | orchestrator | # Tempest 2026-03-16 01:28:37.405605 | orchestrator | 2026-03-16 01:28:37.405612 | orchestrator | + set -e 2026-03-16 01:28:37.405622 | orchestrator | + source /opt/manager-vars.sh 2026-03-16 01:28:37.405631 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-16 01:28:37.405641 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-16 01:28:37.405664 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-16 01:28:37.405674 | orchestrator | ++ CEPH_VERSION=reef 2026-03-16 01:28:37.405682 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-16 01:28:37.405689 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-16 01:28:37.405702 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-16 01:28:37.405709 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-16 01:28:37.405715 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-16 01:28:37.405724 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-16 01:28:37.405729 | orchestrator | ++ export ARA=false 2026-03-16 01:28:37.405734 | orchestrator | ++ ARA=false 2026-03-16 
01:28:37.405745 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-16 01:28:37.405751 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-16 01:28:37.405756 | orchestrator | ++ export TEMPEST=true 2026-03-16 01:28:37.405763 | orchestrator | ++ TEMPEST=true 2026-03-16 01:28:37.405771 | orchestrator | ++ export IS_ZUUL=true 2026-03-16 01:28:37.405779 | orchestrator | ++ IS_ZUUL=true 2026-03-16 01:28:37.405788 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.83 2026-03-16 01:28:37.405796 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.83 2026-03-16 01:28:37.405803 | orchestrator | ++ export EXTERNAL_API=false 2026-03-16 01:28:37.405811 | orchestrator | ++ EXTERNAL_API=false 2026-03-16 01:28:37.405818 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-16 01:28:37.405827 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-16 01:28:37.405835 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-16 01:28:37.405844 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-16 01:28:37.405853 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-16 01:28:37.405861 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-16 01:28:37.405869 | orchestrator | + echo 2026-03-16 01:28:37.405875 | orchestrator | + echo '# Tempest' 2026-03-16 01:28:37.405880 | orchestrator | + echo 2026-03-16 01:28:37.405885 | orchestrator | + [[ ! -e /opt/tempest ]] 2026-03-16 01:28:37.405890 | orchestrator | + osism apply tempest --skip-tags run-tempest 2026-03-16 01:28:49.565147 | orchestrator | 2026-03-16 01:28:49 | INFO  | Task f429b4bc-21fe-42d8-8691-8ef5ee2217e4 (tempest) was prepared for execution. 2026-03-16 01:28:49.565253 | orchestrator | 2026-03-16 01:28:49 | INFO  | It takes a moment until task f429b4bc-21fe-42d8-8691-8ef5ee2217e4 (tempest) has been started and output is visible here. 
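The `server_ping` step earlier in this log iterates over every ACTIVE floating IP and pings each one three times. A minimal standalone sketch of that pattern (the cloud name `test`, the `-c3` count, and the `tr -d '\r'` filter are taken from the log; running it of course requires a working `clouds.yaml` entry):

```shell
#!/usr/bin/env bash
# Ping every ACTIVE floating IP, as in the server_ping step of this job.
# Assumes an --os-cloud entry named "test" exists in clouds.yaml.
set -e

server_ping() {
    # -f value -c "Floating IP Address" prints one bare address per line;
    # tr -d '\r' strips carriage returns some client versions emit.
    for address in $(openstack --os-cloud test floating ip list \
                        --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r'); do
        ping -c3 "$address"
    done
}
```

Invoke `server_ping` on the orchestrator/manager node once the servers report ACTIVE; `ping -c3` exits non-zero on total loss, so with `set -e` the first unreachable address aborts the check.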
2026-03-16 01:30:07.262098 | orchestrator | 2026-03-16 01:30:07.262172 | orchestrator | PLAY [Run tempest] ************************************************************* 2026-03-16 01:30:07.262181 | orchestrator | 2026-03-16 01:30:07.262188 | orchestrator | TASK [osism.validations.tempest : Create tempest workdir] ********************** 2026-03-16 01:30:07.262201 | orchestrator | Monday 16 March 2026 01:28:53 +0000 (0:00:00.268) 0:00:00.269 ********** 2026-03-16 01:30:07.262206 | orchestrator | changed: [testbed-manager] 2026-03-16 01:30:07.262212 | orchestrator | 2026-03-16 01:30:07.262217 | orchestrator | TASK [osism.validations.tempest : Copy tempest wrapper script] ***************** 2026-03-16 01:30:07.262222 | orchestrator | Monday 16 March 2026 01:28:54 +0000 (0:00:00.701) 0:00:00.970 ********** 2026-03-16 01:30:07.262227 | orchestrator | changed: [testbed-manager] 2026-03-16 01:30:07.262233 | orchestrator | 2026-03-16 01:30:07.262238 | orchestrator | TASK [osism.validations.tempest : Check for existing tempest initialisation] *** 2026-03-16 01:30:07.262243 | orchestrator | Monday 16 March 2026 01:28:55 +0000 (0:00:01.268) 0:00:02.238 ********** 2026-03-16 01:30:07.262248 | orchestrator | ok: [testbed-manager] 2026-03-16 01:30:07.262253 | orchestrator | 2026-03-16 01:30:07.262258 | orchestrator | TASK [osism.validations.tempest : Init tempest] ******************************** 2026-03-16 01:30:07.262263 | orchestrator | Monday 16 March 2026 01:28:56 +0000 (0:00:00.479) 0:00:02.718 ********** 2026-03-16 01:30:07.262268 | orchestrator | changed: [testbed-manager] 2026-03-16 01:30:07.262273 | orchestrator | 2026-03-16 01:30:07.262278 | orchestrator | TASK [osism.validations.tempest : Resolve image IDs] *************************** 2026-03-16 01:30:07.262282 | orchestrator | Monday 16 March 2026 01:29:17 +0000 (0:00:20.966) 0:00:23.685 ********** 2026-03-16 01:30:07.262287 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.3) 2026-03-16 
01:30:07.262367 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.2) 2026-03-16 01:30:07.262374 | orchestrator | 2026-03-16 01:30:07.262382 | orchestrator | TASK [osism.validations.tempest : Assert images have been resolved] ************ 2026-03-16 01:30:07.262387 | orchestrator | Monday 16 March 2026 01:29:25 +0000 (0:00:08.174) 0:00:31.859 ********** 2026-03-16 01:30:07.262391 | orchestrator | ok: [testbed-manager] => { 2026-03-16 01:30:07.262396 | orchestrator |  "changed": false, 2026-03-16 01:30:07.262401 | orchestrator |  "msg": "All assertions passed" 2026-03-16 01:30:07.262406 | orchestrator | } 2026-03-16 01:30:07.262412 | orchestrator | 2026-03-16 01:30:07.262416 | orchestrator | TASK [osism.validations.tempest : Get auth token] ****************************** 2026-03-16 01:30:07.262421 | orchestrator | Monday 16 March 2026 01:29:25 +0000 (0:00:00.163) 0:00:32.023 ********** 2026-03-16 01:30:07.262426 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-16 01:30:07.262431 | orchestrator | 2026-03-16 01:30:07.262436 | orchestrator | TASK [osism.validations.tempest : Get endpoint catalog] ************************ 2026-03-16 01:30:07.262440 | orchestrator | Monday 16 March 2026 01:29:29 +0000 (0:00:03.486) 0:00:35.509 ********** 2026-03-16 01:30:07.262445 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-16 01:30:07.262450 | orchestrator | 2026-03-16 01:30:07.262455 | orchestrator | TASK [osism.validations.tempest : Get service catalog] ************************* 2026-03-16 01:30:07.262460 | orchestrator | Monday 16 March 2026 01:29:30 +0000 (0:00:01.683) 0:00:37.192 ********** 2026-03-16 01:30:07.262465 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-16 01:30:07.262469 | orchestrator | 2026-03-16 01:30:07.262474 | orchestrator | TASK [osism.validations.tempest : Register img_file name] ********************** 2026-03-16 01:30:07.262479 | orchestrator | Monday 16 March 2026 01:29:34 +0000 (0:00:03.489) 
0:00:40.682 ********** 2026-03-16 01:30:07.262484 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-16 01:30:07.262488 | orchestrator | 2026-03-16 01:30:07.262493 | orchestrator | TASK [osism.validations.tempest : Download img_file from image_ref] ************ 2026-03-16 01:30:07.262498 | orchestrator | Monday 16 March 2026 01:29:34 +0000 (0:00:00.204) 0:00:40.887 ********** 2026-03-16 01:30:07.262503 | orchestrator | changed: [testbed-manager] 2026-03-16 01:30:07.262508 | orchestrator | 2026-03-16 01:30:07.262513 | orchestrator | TASK [osism.validations.tempest : Install qemu-utils package] ****************** 2026-03-16 01:30:07.262517 | orchestrator | Monday 16 March 2026 01:29:36 +0000 (0:00:02.289) 0:00:43.177 ********** 2026-03-16 01:30:07.262522 | orchestrator | changed: [testbed-manager] 2026-03-16 01:30:07.262527 | orchestrator | 2026-03-16 01:30:07.262532 | orchestrator | TASK [osism.validations.tempest : Convert img_file to qcow2 format] ************ 2026-03-16 01:30:07.262536 | orchestrator | Monday 16 March 2026 01:29:47 +0000 (0:00:10.616) 0:00:53.794 ********** 2026-03-16 01:30:07.262541 | orchestrator | changed: [testbed-manager] 2026-03-16 01:30:07.262546 | orchestrator | 2026-03-16 01:30:07.262551 | orchestrator | TASK [osism.validations.tempest : Get network API extensions] ****************** 2026-03-16 01:30:07.262556 | orchestrator | Monday 16 March 2026 01:29:48 +0000 (0:00:00.706) 0:00:54.500 ********** 2026-03-16 01:30:07.262560 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-16 01:30:07.262565 | orchestrator | 2026-03-16 01:30:07.262570 | orchestrator | TASK [osism.validations.tempest : Revoke token] ******************************** 2026-03-16 01:30:07.262575 | orchestrator | Monday 16 March 2026 01:29:49 +0000 (0:00:01.524) 0:00:56.025 ********** 2026-03-16 01:30:07.262580 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-16 01:30:07.262584 | orchestrator | 2026-03-16 01:30:07.262589 | orchestrator | TASK 
[osism.validations.tempest : Set fact for config option api_extensions] *** 2026-03-16 01:30:07.262594 | orchestrator | Monday 16 March 2026 01:29:51 +0000 (0:00:01.569) 0:00:57.594 ********** 2026-03-16 01:30:07.262604 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-16 01:30:07.262609 | orchestrator | 2026-03-16 01:30:07.262614 | orchestrator | TASK [osism.validations.tempest : Set fact for config option img_file] ********* 2026-03-16 01:30:07.262618 | orchestrator | Monday 16 March 2026 01:29:51 +0000 (0:00:00.176) 0:00:57.770 ********** 2026-03-16 01:30:07.262628 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-16 01:30:07.262633 | orchestrator | 2026-03-16 01:30:07.262638 | orchestrator | TASK [osism.validations.tempest : Resolve floating network ID] ***************** 2026-03-16 01:30:07.262643 | orchestrator | Monday 16 March 2026 01:29:51 +0000 (0:00:00.179) 0:00:57.950 ********** 2026-03-16 01:30:07.262648 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-16 01:30:07.262652 | orchestrator | 2026-03-16 01:30:07.262657 | orchestrator | TASK [osism.validations.tempest : Assert floating network id has been resolved] *** 2026-03-16 01:30:07.262675 | orchestrator | Monday 16 March 2026 01:29:55 +0000 (0:00:04.000) 0:01:01.950 ********** 2026-03-16 01:30:07.262681 | orchestrator | ok: [testbed-manager -> localhost] => { 2026-03-16 01:30:07.262686 | orchestrator |  "changed": false, 2026-03-16 01:30:07.262691 | orchestrator |  "msg": "All assertions passed" 2026-03-16 01:30:07.262695 | orchestrator | } 2026-03-16 01:30:07.262700 | orchestrator | 2026-03-16 01:30:07.262705 | orchestrator | TASK [osism.validations.tempest : Resolve flavor IDs] ************************** 2026-03-16 01:30:07.262710 | orchestrator | Monday 16 March 2026 01:29:55 +0000 (0:00:00.198) 0:01:02.148 ********** 2026-03-16 01:30:07.262715 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})  2026-03-16 
01:30:07.262720 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})
2026-03-16 01:30:07.262725 | orchestrator | skipping: [testbed-manager]
2026-03-16 01:30:07.262730 | orchestrator |
2026-03-16 01:30:07.262735 | orchestrator | TASK [osism.validations.tempest : Assert flavors have been resolved] ***********
2026-03-16 01:30:07.262740 | orchestrator | Monday 16 March 2026 01:29:56 +0000 (0:00:00.394) 0:01:02.543 **********
2026-03-16 01:30:07.262744 | orchestrator | skipping: [testbed-manager]
2026-03-16 01:30:07.262749 | orchestrator |
2026-03-16 01:30:07.262754 | orchestrator | TASK [osism.validations.tempest : Get stats of exclude list] *******************
2026-03-16 01:30:07.262759 | orchestrator | Monday 16 March 2026 01:29:56 +0000 (0:00:00.163) 0:01:02.707 **********
2026-03-16 01:30:07.262764 | orchestrator | ok: [testbed-manager]
2026-03-16 01:30:07.262768 | orchestrator |
2026-03-16 01:30:07.262773 | orchestrator | TASK [osism.validations.tempest : Copy exclude list] ***************************
2026-03-16 01:30:07.262778 | orchestrator | Monday 16 March 2026 01:29:56 +0000 (0:00:00.501) 0:01:03.208 **********
2026-03-16 01:30:07.262783 | orchestrator | changed: [testbed-manager]
2026-03-16 01:30:07.262788 | orchestrator |
2026-03-16 01:30:07.262792 | orchestrator | TASK [osism.validations.tempest : Get stats of include list] *******************
2026-03-16 01:30:07.262797 | orchestrator | Monday 16 March 2026 01:29:57 +0000 (0:00:00.909) 0:01:04.118 **********
2026-03-16 01:30:07.262802 | orchestrator | ok: [testbed-manager]
2026-03-16 01:30:07.262807 | orchestrator |
2026-03-16 01:30:07.262812 | orchestrator | TASK [osism.validations.tempest : Copy include list] ***************************
2026-03-16 01:30:07.262816 | orchestrator | Monday 16 March 2026 01:29:58 +0000 (0:00:00.447) 0:01:04.566 **********
2026-03-16 01:30:07.262821 | orchestrator | skipping: [testbed-manager]
2026-03-16 01:30:07.262826 | orchestrator |
2026-03-16 01:30:07.262831 | orchestrator | TASK [osism.validations.tempest : Create tempest flavors] **********************
2026-03-16 01:30:07.262836 | orchestrator | Monday 16 March 2026 01:29:58 +0000 (0:00:00.146) 0:01:04.712 **********
2026-03-16 01:30:07.262840 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})
2026-03-16 01:30:07.262845 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})
2026-03-16 01:30:07.262850 | orchestrator |
2026-03-16 01:30:07.262855 | orchestrator | TASK [osism.validations.tempest : Copy tempest.conf file] **********************
2026-03-16 01:30:07.262860 | orchestrator | Monday 16 March 2026 01:30:06 +0000 (0:00:07.826) 0:01:12.539 **********
2026-03-16 01:30:07.262865 | orchestrator | changed: [testbed-manager]
2026-03-16 01:30:07.262869 | orchestrator |
2026-03-16 01:30:07.262874 | orchestrator | PLAY RECAP *********************************************************************
2026-03-16 01:30:07.262883 | orchestrator | testbed-manager : ok=24  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-16 01:30:07.262889 | orchestrator |
2026-03-16 01:30:07.262894 | orchestrator |
2026-03-16 01:30:07.262899 | orchestrator | TASKS RECAP ********************************************************************
2026-03-16 01:30:07.262903 | orchestrator | Monday 16 March 2026 01:30:07 +0000 (0:00:00.978) 0:01:13.517 **********
2026-03-16 01:30:07.262908 | orchestrator | ===============================================================================
2026-03-16 01:30:07.262913 | orchestrator | osism.validations.tempest : Init tempest ------------------------------- 20.97s
2026-03-16 01:30:07.262918 | orchestrator | osism.validations.tempest : Install qemu-utils package ----------------- 10.62s
2026-03-16 01:30:07.262922 | orchestrator | osism.validations.tempest : Resolve image IDs --------------------------- 8.17s
2026-03-16 01:30:07.262927 | orchestrator | osism.validations.tempest : Create tempest flavors ---------------------- 7.83s
2026-03-16 01:30:07.262932 | orchestrator | osism.validations.tempest : Resolve floating network ID ----------------- 4.00s
2026-03-16 01:30:07.262937 | orchestrator | osism.validations.tempest : Get service catalog ------------------------- 3.49s
2026-03-16 01:30:07.262942 | orchestrator | osism.validations.tempest : Get auth token ------------------------------ 3.49s
2026-03-16 01:30:07.262946 | orchestrator | osism.validations.tempest : Download img_file from image_ref ------------ 2.29s
2026-03-16 01:30:07.262951 | orchestrator | osism.validations.tempest : Get endpoint catalog ------------------------ 1.68s
2026-03-16 01:30:07.262956 | orchestrator | osism.validations.tempest : Revoke token -------------------------------- 1.57s
2026-03-16 01:30:07.262964 | orchestrator | osism.validations.tempest : Get network API extensions ------------------ 1.52s
2026-03-16 01:30:07.262969 | orchestrator | osism.validations.tempest : Copy tempest wrapper script ----------------- 1.27s
2026-03-16 01:30:07.262974 | orchestrator | osism.validations.tempest : Copy tempest.conf file ---------------------- 0.98s
2026-03-16 01:30:07.262979 | orchestrator | osism.validations.tempest : Copy exclude list --------------------------- 0.91s
2026-03-16 01:30:07.262983 | orchestrator | osism.validations.tempest : Convert img_file to qcow2 format ------------ 0.71s
2026-03-16 01:30:07.262988 | orchestrator | osism.validations.tempest : Create tempest workdir ---------------------- 0.70s
2026-03-16 01:30:07.262993 | orchestrator | osism.validations.tempest : Get stats of exclude list ------------------- 0.50s
2026-03-16 01:30:07.263001 | orchestrator | osism.validations.tempest : Check for existing tempest initialisation --- 0.48s
2026-03-16 01:30:07.504454 | orchestrator | osism.validations.tempest : Get stats of include list ------------------- 0.45s
2026-03-16 01:30:07.504550 | orchestrator | osism.validations.tempest : Resolve flavor IDs -------------------------- 0.39s
2026-03-16 01:30:07.711733 | orchestrator | + sed -i '/log_dir =/d' /opt/tempest/etc/tempest.conf
2026-03-16 01:30:07.716663 | orchestrator | + sed -i '/log_file =/d' /opt/tempest/etc/tempest.conf
2026-03-16 01:30:07.720618 | orchestrator |
2026-03-16 01:30:07.720700 | orchestrator | + [[ false == \t\r\u\e ]]
2026-03-16 01:30:07.720713 | orchestrator | + echo
2026-03-16 01:30:07.720724 | orchestrator | ## IDENTITY (API)
2026-03-16 01:30:07.720735 | orchestrator |
2026-03-16 01:30:07.720747 | orchestrator | + echo '## IDENTITY (API)'
2026-03-16 01:30:07.720764 | orchestrator | + echo
2026-03-16 01:30:07.720778 | orchestrator | + _tempest tempest.api.identity.v3
2026-03-16 01:30:07.720789 | orchestrator | + local regex=tempest.api.identity.v3
2026-03-16 01:30:07.722173 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16
2026-03-16 01:30:07.722237 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-16 01:30:07.723194 | orchestrator | + tee -a /opt/tempest/20260316-0130.log
2026-03-16 01:30:11.132103 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-16 01:30:11.132200 | orchestrator | Did you mean one of these?
2026-03-16 01:30:11.132246 | orchestrator | help
2026-03-16 01:30:11.132258 | orchestrator | init
2026-03-16 01:30:11.422059 | orchestrator |
2026-03-16 01:30:11.422129 | orchestrator | ## IMAGE (API)
2026-03-16 01:30:11.422137 | orchestrator |
2026-03-16 01:30:11.422142 | orchestrator | + echo
2026-03-16 01:30:11.422148 | orchestrator | + echo '## IMAGE (API)'
2026-03-16 01:30:11.422155 | orchestrator | + echo
2026-03-16 01:30:11.422160 | orchestrator | + _tempest tempest.api.image.v2
2026-03-16 01:30:11.422166 | orchestrator | + local regex=tempest.api.image.v2
2026-03-16 01:30:11.422765 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16
2026-03-16 01:30:11.422870 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-16 01:30:11.424878 | orchestrator | + tee -a /opt/tempest/20260316-0130.log
2026-03-16 01:30:14.656815 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-16 01:30:14.656880 | orchestrator | Did you mean one of these?
2026-03-16 01:30:14.656891 | orchestrator | help
2026-03-16 01:30:14.656898 | orchestrator | init
2026-03-16 01:30:14.930648 | orchestrator |
2026-03-16 01:30:14.930716 | orchestrator | ## NETWORK (API)
2026-03-16 01:30:14.930722 | orchestrator |
2026-03-16 01:30:14.930727 | orchestrator | + echo
2026-03-16 01:30:14.930732 | orchestrator | + echo '## NETWORK (API)'
2026-03-16 01:30:14.930738 | orchestrator | + echo
2026-03-16 01:30:14.930743 | orchestrator | + _tempest tempest.api.network
2026-03-16 01:30:14.930749 | orchestrator | + local regex=tempest.api.network
2026-03-16 01:30:14.931311 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16
2026-03-16 01:30:14.932331 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-16 01:30:14.937901 | orchestrator | + tee -a /opt/tempest/20260316-0130.log
2026-03-16 01:30:18.173941 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-16 01:30:18.174067 | orchestrator | Did you mean one of these?
2026-03-16 01:30:18.174083 | orchestrator | help
2026-03-16 01:30:18.174090 | orchestrator | init
2026-03-16 01:30:18.466355 | orchestrator |
2026-03-16 01:30:18.466636 | orchestrator | ## VOLUME (API)
2026-03-16 01:30:18.466690 | orchestrator |
2026-03-16 01:30:18.466697 | orchestrator | + echo
2026-03-16 01:30:18.466704 | orchestrator | + echo '## VOLUME (API)'
2026-03-16 01:30:18.466714 | orchestrator | + echo
2026-03-16 01:30:18.466723 | orchestrator | + _tempest tempest.api.volume
2026-03-16 01:30:18.466733 | orchestrator | + local regex=tempest.api.volume
2026-03-16 01:30:18.467170 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16
2026-03-16 01:30:18.467611 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-16 01:30:18.470690 | orchestrator | + tee -a /opt/tempest/20260316-0130.log
2026-03-16 01:30:21.808267 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-16 01:30:21.808401 | orchestrator | Did you mean one of these?
2026-03-16 01:30:21.808434 | orchestrator | help
2026-03-16 01:30:21.808484 | orchestrator | init
2026-03-16 01:30:22.151911 | orchestrator |
2026-03-16 01:30:22.151990 | orchestrator | ## COMPUTE (API)
2026-03-16 01:30:22.152001 | orchestrator |
2026-03-16 01:30:22.152012 | orchestrator | + echo
2026-03-16 01:30:22.152020 | orchestrator | + echo '## COMPUTE (API)'
2026-03-16 01:30:22.152028 | orchestrator | + echo
2026-03-16 01:30:22.152036 | orchestrator | + _tempest tempest.api.compute
2026-03-16 01:30:22.152043 | orchestrator | + local regex=tempest.api.compute
2026-03-16 01:30:22.152595 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16
2026-03-16 01:30:22.154427 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-16 01:30:22.156267 | orchestrator | + tee -a /opt/tempest/20260316-0130.log
2026-03-16 01:30:25.711871 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-16 01:30:25.712000 | orchestrator | Did you mean one of these?
2026-03-16 01:30:25.712024 | orchestrator | help
2026-03-16 01:30:25.712039 | orchestrator | init
2026-03-16 01:30:26.166157 | orchestrator |
2026-03-16 01:30:26.166238 | orchestrator | ## DNS (API)
2026-03-16 01:30:26.166248 | orchestrator |
2026-03-16 01:30:26.166255 | orchestrator | + echo
2026-03-16 01:30:26.166270 | orchestrator | + echo '## DNS (API)'
2026-03-16 01:30:26.166279 | orchestrator | + echo
2026-03-16 01:30:26.166286 | orchestrator | + _tempest designate_tempest_plugin.tests.api.v2
2026-03-16 01:30:26.166294 | orchestrator | + local regex=designate_tempest_plugin.tests.api.v2
2026-03-16 01:30:26.166312 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16
2026-03-16 01:30:26.169083 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-16 01:30:26.173637 | orchestrator | + tee -a /opt/tempest/20260316-0130.log
2026-03-16 01:30:29.799692 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-16 01:30:29.799791 | orchestrator | Did you mean one of these?
2026-03-16 01:30:29.799806 | orchestrator | help
2026-03-16 01:30:29.799817 | orchestrator | init
2026-03-16 01:30:30.203771 | orchestrator |
2026-03-16 01:30:30.203878 | orchestrator | ## OBJECT-STORE (API)
2026-03-16 01:30:30.203901 | orchestrator |
2026-03-16 01:30:30.203918 | orchestrator | + echo
2026-03-16 01:30:30.203935 | orchestrator | + echo '## OBJECT-STORE (API)'
2026-03-16 01:30:30.203951 | orchestrator | + echo
2026-03-16 01:30:30.203968 | orchestrator | + _tempest tempest.api.object_storage
2026-03-16 01:30:30.203986 | orchestrator | + local regex=tempest.api.object_storage
2026-03-16 01:30:30.204397 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16
2026-03-16 01:30:30.205894 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-16 01:30:30.206834 | orchestrator | + tee -a /opt/tempest/20260316-0130.log
2026-03-16 01:30:33.925625 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-16 01:30:33.925739 | orchestrator | Did you mean one of these?
2026-03-16 01:30:33.925757 | orchestrator | help
2026-03-16 01:30:33.925770 | orchestrator | init
2026-03-16 01:30:34.773988 | orchestrator | ok: Runtime: 0:01:57.389486
2026-03-16 01:30:34.799146 |
2026-03-16 01:30:34.799321 | TASK [Check prometheus alert status]
2026-03-16 01:30:35.348432 | orchestrator | skipping: Conditional result was False
2026-03-16 01:30:35.350149 |
2026-03-16 01:30:35.350235 | PLAY RECAP
2026-03-16 01:30:35.350293 | orchestrator | ok: 25 changed: 12 unreachable: 0 failed: 0 skipped: 4 rescued: 0 ignored: 0
2026-03-16 01:30:35.350319 |
2026-03-16 01:30:35.572655 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-03-16 01:30:35.573950 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-03-16 01:30:36.320872 |
2026-03-16 01:30:36.321034 | PLAY [Post output play]
2026-03-16 01:30:36.338000 |
2026-03-16 01:30:36.338144 | LOOP [stage-output : Register sources]
2026-03-16 01:30:36.400134 |
2026-03-16 01:30:36.400437 | TASK [stage-output : Check sudo]
2026-03-16 01:30:37.343882 | orchestrator | sudo: a password is required
2026-03-16 01:30:37.439032 | orchestrator | ok: Runtime: 0:00:00.011296
2026-03-16 01:30:37.457270 |
2026-03-16 01:30:37.457434 | LOOP [stage-output : Set source and destination for files and folders]
2026-03-16 01:30:37.490292 |
2026-03-16 01:30:37.490598 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-03-16 01:30:37.561565 | orchestrator | ok
2026-03-16 01:30:37.575665 |
2026-03-16 01:30:37.576957 | LOOP [stage-output : Ensure target folders exist]
2026-03-16 01:30:38.109395 | orchestrator | ok: "docs"
2026-03-16 01:30:38.109652 |
2026-03-16 01:30:38.372637 | orchestrator | ok: "artifacts"
2026-03-16 01:30:38.632890 | orchestrator | ok: "logs"
2026-03-16 01:30:38.651993 |
2026-03-16 01:30:38.654728 | LOOP [stage-output : Copy files and folders to staging folder]
2026-03-16 01:30:38.691942 |
2026-03-16 01:30:38.692158 | TASK [stage-output : Make all log files readable]
2026-03-16 01:30:38.994200 | orchestrator | ok
2026-03-16 01:30:39.001997 |
2026-03-16 01:30:39.002120 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-03-16 01:30:39.039451 | orchestrator | skipping: Conditional result was False
2026-03-16 01:30:39.050622 |
2026-03-16 01:30:39.050749 | TASK [stage-output : Discover log files for compression]
2026-03-16 01:30:39.075332 | orchestrator | skipping: Conditional result was False
2026-03-16 01:30:39.090571 |
2026-03-16 01:30:39.090739 | LOOP [stage-output : Archive everything from logs]
2026-03-16 01:30:39.128221 |
2026-03-16 01:30:39.128393 | PLAY [Post cleanup play]
2026-03-16 01:30:39.136307 |
2026-03-16 01:30:39.136475 | TASK [Set cloud fact (Zuul deployment)]
2026-03-16 01:30:39.200945 | orchestrator | ok
2026-03-16 01:30:39.209325 |
2026-03-16 01:30:39.209453 | TASK [Set cloud fact (local deployment)]
2026-03-16 01:30:39.244126 | orchestrator | skipping: Conditional result was False
2026-03-16 01:30:39.253458 |
2026-03-16 01:30:39.253606 | TASK [Clean the cloud environment]
2026-03-16 01:30:39.874080 | orchestrator | 2026-03-16 01:30:39 - clean up servers
2026-03-16 01:30:40.673373 | orchestrator | 2026-03-16 01:30:40 - testbed-manager
2026-03-16 01:30:40.773315 | orchestrator | 2026-03-16 01:30:40 - testbed-node-5
2026-03-16 01:30:40.857339 | orchestrator | 2026-03-16 01:30:40 - testbed-node-2
2026-03-16 01:30:40.944190 | orchestrator | 2026-03-16 01:30:40 - testbed-node-4
2026-03-16 01:30:41.042501 | orchestrator | 2026-03-16 01:30:41 - testbed-node-1
2026-03-16 01:30:41.126455 | orchestrator | 2026-03-16 01:30:41 - testbed-node-3
2026-03-16 01:30:41.218907 | orchestrator | 2026-03-16 01:30:41 - testbed-node-0
2026-03-16 01:30:41.311475 | orchestrator | 2026-03-16 01:30:41 - clean up keypairs
2026-03-16 01:30:41.327954 | orchestrator | 2026-03-16 01:30:41 - testbed
2026-03-16 01:30:41.353473 | orchestrator | 2026-03-16 01:30:41 - wait for servers to be gone
2026-03-16 01:30:52.191233 | orchestrator | 2026-03-16 01:30:52 - clean up ports
2026-03-16 01:30:52.378207 | orchestrator | 2026-03-16 01:30:52 - 47da8cbc-792f-42a6-a63e-385f9740f42b
2026-03-16 01:30:52.608103 | orchestrator | 2026-03-16 01:30:52 - 4b3e1b6d-5967-4aca-b241-a4e1190fd0ab
2026-03-16 01:30:52.917935 | orchestrator | 2026-03-16 01:30:52 - 769b0d77-6b0a-404a-9bae-2e35416beebc
2026-03-16 01:30:53.216347 | orchestrator | 2026-03-16 01:30:53 - 7ce9516c-9f17-416a-a91e-359565e6bfe4
2026-03-16 01:30:53.455301 | orchestrator | 2026-03-16 01:30:53 - 7ef17a65-14a5-45ad-8d7d-6b2d58ba90b3
2026-03-16 01:30:53.694171 | orchestrator | 2026-03-16 01:30:53 - b88deca5-ee76-4d7c-ae0f-026fce88e02c
2026-03-16 01:30:53.910932 | orchestrator | 2026-03-16 01:30:53 - bb656fc8-9351-4139-b8df-448b84278644
2026-03-16 01:30:54.370710 | orchestrator | 2026-03-16 01:30:54 - clean up volumes
2026-03-16 01:30:54.517654 | orchestrator | 2026-03-16 01:30:54 - testbed-volume-2-node-base
2026-03-16 01:30:54.563904 | orchestrator | 2026-03-16 01:30:54 - testbed-volume-3-node-base
2026-03-16 01:30:54.612039 | orchestrator | 2026-03-16 01:30:54 - testbed-volume-0-node-base
2026-03-16 01:30:54.657300 | orchestrator | 2026-03-16 01:30:54 - testbed-volume-5-node-base
2026-03-16 01:30:54.706112 | orchestrator | 2026-03-16 01:30:54 - testbed-volume-manager-base
2026-03-16 01:30:54.753003 | orchestrator | 2026-03-16 01:30:54 - testbed-volume-1-node-base
2026-03-16 01:30:54.794012 | orchestrator | 2026-03-16 01:30:54 - testbed-volume-4-node-4
2026-03-16 01:30:54.843737 | orchestrator | 2026-03-16 01:30:54 - testbed-volume-3-node-3
2026-03-16 01:30:54.897406 | orchestrator | 2026-03-16 01:30:54 - testbed-volume-4-node-base
2026-03-16 01:30:54.944368 | orchestrator | 2026-03-16 01:30:54 - testbed-volume-2-node-5
2026-03-16 01:30:54.992758 | orchestrator | 2026-03-16 01:30:54 - testbed-volume-6-node-3
2026-03-16 01:30:55.045552 | orchestrator | 2026-03-16 01:30:55 - testbed-volume-7-node-4
2026-03-16 01:30:55.091589 | orchestrator | 2026-03-16 01:30:55 - testbed-volume-5-node-5
2026-03-16 01:30:55.144132 | orchestrator | 2026-03-16 01:30:55 - testbed-volume-0-node-3
2026-03-16 01:30:55.190313 | orchestrator | 2026-03-16 01:30:55 - testbed-volume-8-node-5
2026-03-16 01:30:55.233788 | orchestrator | 2026-03-16 01:30:55 - testbed-volume-1-node-4
2026-03-16 01:30:55.280578 | orchestrator | 2026-03-16 01:30:55 - disconnect routers
2026-03-16 01:30:55.457834 | orchestrator | 2026-03-16 01:30:55 - testbed
2026-03-16 01:30:56.649149 | orchestrator | 2026-03-16 01:30:56 - clean up subnets
2026-03-16 01:30:57.219465 | orchestrator | 2026-03-16 01:30:57 - subnet-testbed-management
2026-03-16 01:30:57.462372 | orchestrator | 2026-03-16 01:30:57 - clean up networks
2026-03-16 01:30:57.656104 | orchestrator | 2026-03-16 01:30:57 - net-testbed-management
2026-03-16 01:30:57.979274 | orchestrator | 2026-03-16 01:30:57 - clean up security groups
2026-03-16 01:30:58.040974 | orchestrator | 2026-03-16 01:30:58 - testbed-node
2026-03-16 01:30:58.202275 | orchestrator | 2026-03-16 01:30:58 - testbed-management
2026-03-16 01:30:58.345749 | orchestrator | 2026-03-16 01:30:58 - clean up floating ips
2026-03-16 01:30:58.380325 | orchestrator | 2026-03-16 01:30:58 - 81.163.192.83
2026-03-16 01:30:58.799383 | orchestrator | 2026-03-16 01:30:58 - clean up routers
2026-03-16 01:30:58.941088 | orchestrator | 2026-03-16 01:30:58 - testbed
2026-03-16 01:31:00.309274 | orchestrator | ok: Runtime: 0:00:20.272479
2026-03-16 01:31:00.313841 |
2026-03-16 01:31:00.314013 | PLAY RECAP
2026-03-16 01:31:00.314143 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-03-16 01:31:00.314204 |
2026-03-16 01:31:00.459810 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-03-16 01:31:00.461039 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-03-16 01:31:01.244775 |
2026-03-16 01:31:01.244959 | PLAY [Cleanup play]
2026-03-16 01:31:01.264383 |
2026-03-16 01:31:01.264582 | TASK [Set cloud fact (Zuul deployment)]
2026-03-16 01:31:01.332632 | orchestrator | ok
2026-03-16 01:31:01.341485 |
2026-03-16 01:31:01.341647 | TASK [Set cloud fact (local deployment)]
2026-03-16 01:31:01.386352 | orchestrator | skipping: Conditional result was False
2026-03-16 01:31:01.402571 |
2026-03-16 01:31:01.402717 | TASK [Clean the cloud environment]
2026-03-16 01:31:02.679475 | orchestrator | 2026-03-16 01:31:02 - clean up servers
2026-03-16 01:31:03.291915 | orchestrator | 2026-03-16 01:31:03 - clean up keypairs
2026-03-16 01:31:03.312192 | orchestrator | 2026-03-16 01:31:03 - wait for servers to be gone
2026-03-16 01:31:03.357534 | orchestrator | 2026-03-16 01:31:03 - clean up ports
2026-03-16 01:31:03.440514 | orchestrator | 2026-03-16 01:31:03 - clean up volumes
2026-03-16 01:31:03.529490 | orchestrator | 2026-03-16 01:31:03 - disconnect routers
2026-03-16 01:31:03.568375 | orchestrator | 2026-03-16 01:31:03 - clean up subnets
2026-03-16 01:31:03.600067 | orchestrator | 2026-03-16 01:31:03 - clean up networks
2026-03-16 01:31:03.783254 | orchestrator | 2026-03-16 01:31:03 - clean up security groups
2026-03-16 01:31:03.828005 | orchestrator | 2026-03-16 01:31:03 - clean up floating ips
2026-03-16 01:31:03.856135 | orchestrator | 2026-03-16 01:31:03 - clean up routers
2026-03-16 01:31:04.444002 | orchestrator | ok: Runtime: 0:00:01.685645
2026-03-16 01:31:04.445766 |
2026-03-16 01:31:04.445852 | PLAY RECAP
2026-03-16 01:31:04.445904 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-03-16 01:31:04.445930 |
2026-03-16 01:31:04.575921 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-03-16 01:31:04.580609 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-03-16 01:31:05.379953 |
2026-03-16 01:31:05.380120 | PLAY [Base post-fetch]
2026-03-16 01:31:05.396074 |
2026-03-16 01:31:05.396215 | TASK [fetch-output : Set log path for multiple nodes]
2026-03-16 01:31:05.462134 | orchestrator | skipping: Conditional result was False
2026-03-16 01:31:05.476025 |
2026-03-16 01:31:05.476236 | TASK [fetch-output : Set log path for single node]
2026-03-16 01:31:05.519293 | orchestrator | ok
2026-03-16 01:31:05.542017 |
2026-03-16 01:31:05.542214 | LOOP [fetch-output : Ensure local output dirs]
2026-03-16 01:31:06.089561 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/8c2e2d71d28f4d479ff9ce8d3bae7f94/work/logs"
2026-03-16 01:31:06.403329 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/8c2e2d71d28f4d479ff9ce8d3bae7f94/work/artifacts"
2026-03-16 01:31:06.705460 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/8c2e2d71d28f4d479ff9ce8d3bae7f94/work/docs"
2026-03-16 01:31:06.724998 |
2026-03-16 01:31:06.725151 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-03-16 01:31:07.714106 | orchestrator | changed: .d..t...... ./
2026-03-16 01:31:07.714417 | orchestrator | changed: All items complete
2026-03-16 01:31:07.714459 |
2026-03-16 01:31:08.441926 | orchestrator | changed: .d..t...... ./
2026-03-16 01:31:09.150045 | orchestrator | changed: .d..t...... ./
2026-03-16 01:31:09.172957 |
2026-03-16 01:31:09.173180 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-03-16 01:31:09.209386 | orchestrator | skipping: Conditional result was False
2026-03-16 01:31:09.219599 | orchestrator | skipping: Conditional result was False
2026-03-16 01:31:09.231439 |
2026-03-16 01:31:09.231552 | PLAY RECAP
2026-03-16 01:31:09.231608 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-03-16 01:31:09.231634 |
2026-03-16 01:31:09.360900 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-03-16 01:31:09.361999 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-16 01:31:10.210811 |
2026-03-16 01:31:10.210997 | PLAY [Base post]
2026-03-16 01:31:10.226192 |
2026-03-16 01:31:10.226341 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-03-16 01:31:11.254278 | orchestrator | changed
2026-03-16 01:31:11.262232 |
2026-03-16 01:31:11.262351 | PLAY RECAP
2026-03-16 01:31:11.262442 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-03-16 01:31:11.262543 |
2026-03-16 01:31:11.394303 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-16 01:31:11.395457 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-03-16 01:31:12.176222 |
2026-03-16 01:31:12.176394 | PLAY [Base post-logs]
2026-03-16 01:31:12.187079 |
2026-03-16 01:31:12.187226 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-03-16 01:31:12.702979 | localhost | changed
2026-03-16 01:31:12.725151 |
2026-03-16 01:31:12.725369 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-03-16 01:31:12.765916 | localhost | ok
2026-03-16 01:31:12.772223 |
2026-03-16 01:31:12.772438 | TASK [Set zuul-log-path fact]
2026-03-16 01:31:12.793077 | localhost | ok
2026-03-16 01:31:12.814672 |
2026-03-16 01:31:12.814940 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-16 01:31:12.847194 | localhost | ok
2026-03-16 01:31:12.853568 |
2026-03-16 01:31:12.853753 | TASK [upload-logs : Create log directories]
2026-03-16 01:31:13.495759 | localhost | changed
2026-03-16 01:31:13.499654 |
2026-03-16 01:31:13.499791 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-03-16 01:31:14.147869 | localhost -> localhost | ok: Runtime: 0:00:00.004810
2026-03-16 01:31:14.154236 |
2026-03-16 01:31:14.154397 | TASK [upload-logs : Upload logs to log server]
2026-03-16 01:31:14.753074 | localhost | Output suppressed because no_log was given
2026-03-16 01:31:14.756048 |
2026-03-16 01:31:14.756271 | LOOP [upload-logs : Compress console log and json output]
2026-03-16 01:31:14.817815 | localhost | skipping: Conditional result was False
2026-03-16 01:31:14.825842 | localhost | skipping: Conditional result was False
2026-03-16 01:31:14.835575 |
2026-03-16 01:31:14.835693 | LOOP [upload-logs : Upload compressed console log and json output]
2026-03-16 01:31:14.890656 | localhost | skipping: Conditional result was False
2026-03-16 01:31:14.891109 |
2026-03-16 01:31:14.895815 | localhost | skipping: Conditional result was False
2026-03-16 01:31:14.909391 |
2026-03-16 01:31:14.909722 | LOOP [upload-logs : Upload console log and json output]